Platform Installation and Administration

AppDynamics Application Intelligence Platform Version 4.2.x

Contents

Platform Installation and Administration
Supported Environments and Versions
Download AppDynamics Software
Platform Installation Quick Start
Install the Controller
Controller System Requirements
Controller Sizing FAQ
Tuning for Large Scale Deployments
Controller Port Settings
Configure for the Controller
Install the Controller as a Linux Service
Configure Windows for the Controller
Deploy the Controller to Production
Deploy with a Reverse Proxy
Troubleshooting Controller Issues
Uninstall the Controller
Upgrade the Controller
Install the Events Service
Events Service Sizing and Capacity Planning
Install the Events Service on Linux
Install the Events Service on Windows
Load Balance Events Service Traffic
Connect to the Events Service
Upgrade the Events Service
Backup and Restore Events Service Data
Install the EUM Server
Install a Split (Production) EUM Server
Install a Single Host (Demo) EUM Server
Install and Host a Custom Geo Server for Browser RUM
Troubleshoot EUM Server Installation
Secure the EUM Server
Configure the EUM Server
Upgrade the EUM Server
Administer the Controller
Administering Users
Administrative Users
Reset Root User Password
Configure Authentication Using LDAP
Configure Authentication Using SAML
Configure SAML for OneLogin
Configure SAML for Okta
Configure SAML for Microsoft Active Directory on Azure
Configure SAML for Microsoft Active Directory Federation Services
Disable SAML Authentication for an Account
Start or Stop the Controller
Controller Data and Backups
Controller Data Backup and Restore
Controller Disk Space and the
Database Size and Data Retention
Controller High Availability
Using the High Availability (HA) Toolkit
Platform Version Information
Configure the Email Server
Access the Administration Console
Modify GlassFish JVM Options
Controller Tenant Mode and Accounts
Customize System Notifications
Migrate the Controller
Controller Logs
Controller Dump Files
Security
Controller SSL and Certificates
Configure the Security Protocol
Configure an SSH Key for Controller Access
Mutual Authentication
Cloud Services
Cloud Auto-Scaling
Compute Clouds
Machine Images and Instances
Create a Workflow and Workflow Steps
Add Tasks for Workflow Steps
Custom Tasks
Custom Cloud Connectors
Platform as a Service Integrations
Integration Modules
Integrate AppDynamics with DB CAM
Integrate AppDynamics with Splunk

Platform Installation and Administration

On this page:

SaaS Administration Topics
On-Premises AppDynamics Platform Architecture
Hybrid AppDynamics Platform Architecture

This section provides information on installing, configuring, and administering the AppDynamics Platform on premises. The platform includes the AppDynamics Controller and the Events Service, the long-term unstructured document storage component for the AppDynamics Platform.

SaaS Administration Topics

While most of the topics in this section apply to an on-premises platform deployment, a few topics apply to the SaaS Controller as well. These include:

Administering Users
Cloud Auto-Scaling

On-Premises AppDynamics Platform Architecture

AppDynamics gives you complete, end-to-end visibility into the performance of your environment. The following diagram gives you a high-level view of a complete, full-scale on-premises AppDynamics Application Intelligence Platform deployment. It shows how the parts of the deployment connect to provide application, database, infrastructure, and end-user monitoring, and more.

Depending on the scale of your deployment, your requirements, and the products you are using, your own deployment is likely to consist of a subset of the one shown.

How the Components Work Together

The diagram illustrates some of the installation and network considerations for deploying AppDynamics, which are further outlined below.

Application Performance Management: App Server Agents attached to the monitored applications send data directly to the Controller. See Install the Controller and Instrument Applications.

Server Monitoring: Machine Agents reside on monitored servers and report data directly to the Controller. See Install the Controller and Install the Standalone Machine Agent.

Application Analytics: An analytics plugin embedded in the App Server Agent communicates with a local Analytics Agent instance. One or more Analytics Agents in a deployment send data to the Events Service cluster. The Analytics Agent is bundled with the Machine Agent but can be installed and run individually as well. See Install the Controller, Instrument Applications, Installing Agent-Side Components, and Install the Events Service.

Database Monitoring: The Database Agent connects by JDBC to the monitored databases. The Database Agent sends data to the Controller and to the Events Service instance bundled with the Controller by default. For scalable and redundant storage, the Database Agent can be configured to use the clustered Events Service. See Install the Controller, Installing the Database Agent, and (Optional) Install the Events Service.

End User Monitoring: For an on-premises EUM installation, you configure a connection from the web and mobile real user monitoring agents to the on-premises EUM Server. The EUM Server sends data to the Events Service cluster. See Install the Controller, Install the EUM Server, and Install the Events Service.

Data Storage

Data is stored in the following locations:

APM configuration and metric data in the Controller MySQL database
EUM configuration and metric data in the Controller MySQL database
EUM event data in the Events Service
Transaction and log analytics data in the Events Service

Hybrid AppDynamics Platform Architecture

In a hybrid deployment of the AppDynamics Platform, the EUM RUM Service, the EUM Events Service, and the EUM Synthetic Monitoring Service are SaaS hosted. The Controller and Events Service run on-premises. APM, Database, and Infrastructure metrics report to the on-premises Controller. Transaction and Log Analytics report to the on-premises Events Service cluster. EUM agents send beacon data to SaaS endpoints, while the SaaS Analytics service stores EUM event data.

Synthetic Monitoring comprises multiple platform components and connections. For more information about Synthetic Monitoring, see Browser Synthetic Monitoring.

Data Storage

Data is stored in the following locations:

APM configuration and metric data in the on-premises Controller MySQL database
EUM configuration and metric data in the on-premises Controller MySQL database
EUM event data in the SaaS-hosted Events Service
Transaction and log analytics data in the on-premises Events Service

Supported Environments and Versions

On this page:

Supported Platform Matrix for the AppDynamics Controller
Supported Platform Matrix for the Java Agent
Supported Platform Matrix for the .NET Agent
Supported Loggers for the .NET Agent
Supported Platform Matrix for the PHP Agent
Supported Platform Matrix for the Node.js Agent
Supported Platform Matrix for the Python Agent
Supported Platform Matrix for the Apache Server Agent
Supported Platform Matrix for the C and C++ Agent
Supported Platforms for the Standalone Machine Agent
Browser Requirements for Sessions
Supported Platform Matrix for Mobile RUM
Supported Platform as a Service (PaaS) Providers

This page provides an aggregated view of the system requirements for the Controller and agents.

This page lists the third-party platforms and other types of technologies supported by AppDynamics. AppDynamics does not support platforms or technologies that are not listed here or that are noted as unsupported.

AppDynamics agents and the Controller are not supported on machines that use Power Architecture processors, including PowerPC processors.

Supported Platform Matrix for the AppDynamics Controller

The Controller is supported on the following operating systems:

Linux (32- and 64-bit):

Red Hat Enterprise Linux (RHEL) 6.1, 6.2, 6.3, 6.4, 6.5, 6.6, 6.7, 7.0, 7.2, and 7.3
CentOS 5.9, 6.1, 6.2, 6.3, 6.4, 6.5, 6.6, 6.7, and 7.0
Fedora 14
Ubuntu 8, 12, 14
Open SUSE 11.x
SUSE Linux Enterprise Server 12

Windows (32- and 64-bit):

Windows Server 2003
Windows Server 2008, Windows Server 2008 R2
Windows Server 2012 R1 Standard and Datacenter, Windows Server 2012 R2 Standard and Datacenter
Windows 7 Pro
Windows 8

Cloud: Amazon EC2, Rackspace, Azure

You can use the following file systems for Controller machines that run Linux:

ZFS
EXT4
XFS
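As an illustrative check (not from the original document; the path shown is a placeholder for your Controller installation directory), you can confirm which file system backs a Linux Controller machine:

# Print the file system type (for example ext4, xfs, or zfs) for the given path
df -T /opt/appdynamics/controller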

Supported Web Browsers for the Controller UI

The AppDynamics UI is an HTML 5-based browser application that works best with the latest version of any modern browser. The Controller UI has been tested with the following browsers and versions:

IE 9+, Safari 6+, Chrome 16+, Firefox 6+, Microsoft Edge

Opera and older versions of Firefox, IE, and Safari browsers may still operate, but some features may not display as intended.

The Controller UI requires Flash Player 10 or greater; AppDynamics recommends version 11.

Certain types of ad blockers can interfere with features in the Controller UI. We recommend disabling ad blockers while using the Controller UI.

LDAPv3 Support

You can delegate Controller UI authentication and authorization to external directory servers that comply with LDAP (Lightweight Directory Access Protocol) version 3.

While the Controller should be able to work with any LDAPv3-compliant server, it has been verified against these LDAP products:

Microsoft Active Directory for Windows Server 2008 SP2+
OpenLDAP 2.4+

Supported Platform Matrix for the Java Agent

Notes:

A dash ("-") in a table cell indicates that the column is not relevant or not supported for that particular environment.
In cases where no version is provided, assume that all versions are supported. Contact AppDynamics Sales for confirmation.
For environments that require additional configuration, a separate table describing or linking to configuration information follows the support matrix.
For environments supported by AppDynamics End User Monitoring, see Supported Environments and Versions - Web EUM.
For environments supported by AppDynamics Server Monitoring, see Standalone Machine Agent Requirements and Supported Environments.

JVM Support

The AppDynamics Java Agent supports applications running with a JRE or a full JDK. The Java Agent supports the following JVM types and versions.

Vendor Implementation Version Object Automatic Custom Memory Structures Instance Leak Tracking Detection

Content Access Requires JVM Inspection Tracking Restart?

Azul Zing 15.x Linux x64 Yes4 Yes - - -

Azul Zulu 7.x Linux x64 Yes4 Yes - - -

Oracle Java HotSpot 7 Update Solaris Sparc 64, - - - - - 45+ Windows, Linux

Oracle Java SE 81 Solaris Sparc 64, Yes Yes Yes Yes Yes (Standard Windows, Linux Edition)

BEA JRockit 1.5 - - Yes Yes Yes Yes

BEA JRockit 1.6, 1.7 - - Yes Yes - -

Oracle JRockit JVM 28.1+ Linux Intel 64 Windows - - - - -

IBM JVM 1.5.x, 1.6.x, - - Yes, as noted Yes, as noted2,3 - - 1.7.x 2

SUN JVM 1.5, 1.6, 1.7 - Yes Yes Yes Yes -

Open OpenJDK 1.6 Linux, windows, - Yes - - - Source everywhere

HP OpenVMS ------

Notes:

1 For examples of instrumenting new language constructs in Java SE 8, see Instrumenting Java 8 Constructs.

2 Object instance tracking, automatic leak detection, and custom memory structure monitoring are not supported with the IBM Java Agent on an IBM JVM. It's possible to work around this limitation by using the Java Agent for the Sun and JRockit JVM on an IBM JVM, but doing so can result in a negative performance impact.

3 For IBM JVMs, a restart is required after configuring the custom memory structure.

4 Object Instance Tracking is supported only for version 4.2.4 and later. In addition, you must also pass the JVM argument -XX:+ProfileLiveObjects for versions higher than 4.2.4.

JVM Language Frameworks Support

No additional configuration is required for these frameworks.

Vendor JVM Version Correlation/ Exit Transports Notes Language Entry Points Framework Points

Open Source / Akka Actor 2.1 – 2.3 Yes Yes Netty Remoting exit/entry supported. Typesafe Reactive Platform Persistence (experimental module in v2.3) is not currently supported.

Open Source Groovy - Yes Yes

Open Source / Play for Scala 2.1 – 2.3 Yes - HTTP over Includes framework specific entry points Typesafe Reactive Netty Platform

Open Source / Spray toolkit 1.1.x Yes Yes HTTP Entry points are detected and configurable as servlet entry Typesafe Reactive (Spray.io) point and exit points as HTTP exits. Platform

Pivotal - - - -

The Typesafe Reactive Platform is a JVM-based runtime and collection of tools used to build reactive applications. This includes Scala, Play, Akka, and Spray.io.

Application Servers

The Java Agent supports the following application servers. Some require additional configuration. Click the link on the server or OSGi Runtime for information about additional requirements or related configuration topics. The agent usually discovers application servers as an entry point.

Vendor / Version SOA RMI Supported JMX Entry OSGi Runtime Protocol Points

Apache Felix - - - - Yes

Apache Sling - - - - Yes

Apache Tomcat 5.x, 6.x, 7.x, 8.x - - Yes

Apache 1.x - 4.x - - - -

Adobe Cold Fusion 8.x, 9.x - No - Yes

Equinox - - - - Yes

Eclipse 6.x, 7.x - - - -

IBM InfoSphere 8.x - - - Yes

IBM WebSphere 6.1 JAX-WS - - Yes

IBM WebSphere 7.x JAX-WS Yes, detect and correlate Yes for Yes WebSphere PMI

IBM WebSphere 8.x JAX-WS Yes, detect and correlate - Yes

Open Liferay Portal - - - - - Source

Open JBoss Wildfly (formerly JBoss 4.x, 5.x, 6.x, Yes Yes Source Server) 7.x, 8.x

Sun/Oracle GlassFish Enterprise Server 2.x - - Yes Yes

Oracle GlassFish Server and 3.x, 4.x - - Yes for AMX Yes GlassFish Server Open Source Edition

Oracle and WebLogic Server 9.x+ JAX-WS Yes, detect and correlate Yes Yes BEA for 10.x

Red Hat JBoss Enterprise Application 6.11, 6.2.0, 7.x Yes Yes Server

Software webMethods 9.5, 9.6 - - - Yes AG

Tibco ActiveMatrix BusinessWorks 5.x - - - Yes Service Engine

Application Server (OC4J) - - Yes, detect and correlate - Yes for 10.x

- Grails, with Tomcat 7.x, Glassfish - - - - v3, Weblogic 12.1.1 (12c)

Notes:

Servlet 3.x detection is not supported.

Application Server Configuration

For application server environments that require additional configuration, this section provides some information and links to topics that help you configure the environment. Environments in the Application Server Support table that require additional configuration link to the configuration table below.

Application Server Configuration Notes

Apache Felix OSGi Infrastructure Configuration

Apache Sling OSGi Infrastructure Configuration

Apache Tomcat Apache Tomcat Startup Settings

Apache Resin Resin Startup Settings

Apache Cold Fusion Requires configuration for transaction discovery; see Servlet Entry Points

Equinox OSGi Infrastructure Configuration

Eclipse Jetty Jetty Startup Settings

IBM InfoSphere IBM WebSphere and InfoSphere Startup Settings

IBM WebSphere IBM WebSphere and InfoSphere Startup Settings

Sun GlassFish Enterprise Server Manually configure GlassFish JDBC connection pools using MBean attributes and custom JMX metrics

GlassFish Startup Settings Modify GlassFish JVM Options

Oracle GlassFish Server (including GlassFish Server GlassFish Startup Settings Open Source Edition) Modify GlassFish JVM Options

Oracle and BEA WebLogic Server Oracle WebLogic Startup Settings

Software AG webMethods webMethods Startup Settings

Tibco ActiveMatrix BusinessWorks Service Engine Tibco ActiveMatrix BusinessWorks Service Engine Settings

Open source JBoss Wildfly JBoss and Wildfly Startup Settings

Red Hat JBoss Enterprise Application Server JBoss and Wildfly Startup Settings

Red Hat JBoss JBoss and Wildfly Startup Settings

PaaS Providers

PaaS Buildpack Provider

Pivotal Cloud Foundry: Java Buildpack 3.4 and higher. (See Using AppDynamics with Java Applications on Pivotal Cloud Foundry for a walkthrough of using the Java buildpack.)

Red Hat OpenShift 3: JBoss EAP 6.4 and WildFly 8.1 Docker images. For documentation and download information, see the AppDynamics Java APM Agent page on the Red Hat Customer Portal.

Message Oriented Middleware Support

The Java Agent supports the following message oriented middleware environments. Some require additional configuration. Click the link on the messaging server name in the following support matrix for information about additional configuration required or related configuration topics. Message oriented middleware servers are usually found by the Java Agent as an entry point.

Vendor Messaging Server Version Protocol Correlation/Entry Exit JMX Configuration Notes Points Points

Amazon Simple Queue Service - - Yes (correlation Yes - See "Amazon Simple Queue Service Backends" on Java (SQS) only) Backend Detection

Amazon Simple Notification Service - - No Yes - See "Amazon Simple Notification Service Backends" on J (SNS) ava Backend Detection

Apache ActiveMQ 5.x+ JMS 1.x Yes Yes Yes

Apache ActiveMQ 5.x+ STOMP No - Yes

Apache ActiveMQ 5.8.x+ AMQP 1.0 No - Yes Example Message Queue Backend Configuration

Apache Axis 1.x, 2.x JAX-WS Yes Yes - Default exclude rules exist for Apache Axis, Axis2, and Axis Admin Servlets. See also "Web Service Entry Points" on Java Backend Detection.

Apache Apache CXF 2.1 JAX-WS Yes Yes - To enable correlation, set node property enable-soap-header-correlation=true.

Apache Synapse 2.1 HTTP Yes Yes - To enable correlation, set node property enable-soap-header-correlation=true.

Fiorano Fiorano MQ - - - -

IBM IBM Web Application Server 6.1+, 7.x Embedded - Yes - Example Message Queue Backend Configuration (WAS) JMS

IBM IBM MQ (formerly IBM 6.x+ JMS Yes Yes - Example Message Queue Backend Configuration WebSphere MQ)

Open Source Open MQ - - - - -

Mulesoft Mule ESB 3.4 HTTP Yes Yes - Mule ESB Startup Settings

Mule ESB Support

Oracle Java Message Service 2.0 JMS Correlation of the Yes listener is disabled by default

Oracle Oracle AQ - JMS - Yes -

Oracle / WebLogic 9.x+ JMS 1.1 Yes Yes Yes Oracle WebLogic Startup Settings BEA

Progress SonicMQ - - - - -

Pivotal RabbitMQ - HTTP - Yes - See "RabbitMQ Backends" on Java Backend Detection

Rabbit RabbitMQ Spring Client - - Yes Yes - See "RabbitMQ Backends" on Java Backend Detection

Red Hat HornetQ (formerly JBoss - Yes Messaging and JBoss MQ)

Red Hat JBoss A-MQ 4.x+ - - - Yes

Spring Spring Integration 2.2.0 JMS Yes Yes Yes Spring Integration Support

See also "Java Message Service Backends" on Java Backend Detection

WSO2 ESB 4.7.0 - Yes Yes -

JDBC Drivers and Database Servers Support

The Java Agent supports the following JDBC drivers and database server environments. AppDynamics can follow transactions using these drivers to the designated database.

JDBC Vendor Driver Version Driver Type Database Database Server Version

Apache 10.9.1.0 Embedded or client Derby -

Apache - - Cassandra -

Progress DataDirect data connectivity for ODBC and JDBC driver access, data - - integration, and SaaS and cloud computing solutions

IBM JDBC 3.0 version 3.57.82 or DB2 Universal JDBC driver DB2 9.x JDBC 4.0 version 4.7.85

IBM JDBC 3.0 version 3.66.46 or DB2 Universal JDBC driver DB2 10.1 JDBC 4.0 version 4.16.53

IBM - Type IV Informix -

Microsoft 4 Type II MS SQL Server 2012*

Oracle MySQL, 5.x Type II, Type IV MySQL 5.x MySQL Community

Open Source Connector/J 5.1.27 Type IV MySQL 5.x

Open Source - Type IV Postgres 8.x, 9.x

Oracle 9.x Type II, Type IV Oracle 8i+ Database

Sybase jConnect Type IV Sybase -

Teradata Teradata -

Notes:

Type II is a C or OCI driver

Type IV is a thin database client and is a pure Java driver

Business Transaction Error Detection

The Java Agent supports the following logging frameworks for business transaction error detection:

Apache Log4j and Log4j 2
java.util.logging
Simple Logging Facade for Java (SLF4J): support has been added for the method public void error(String format, Object... argArray)
Logback

To instrument other types of loggers, see Configure Error Detection.

NoSQL/Data Grids/Cache Servers Support

The Java Agent supports these NoSQL, data grids and cache server environments. Some require additional configuration. Click the link on the database, data grid or cache name in the following support matrix for information about additional configuration required or related configuration topics.

Vendor Database/Data Version Correlation/Entry Points JMX Configuration Notes Grid/Cache

Amazon DynamoDB - Exit Points - See "" on Java Backend Detection.

Amazon Simple Storage - - - "Amazon Simple Storage Service Backends" on Java Backend Service (S3) Detection.

Apache Cassandra (DataStax drivers, Thrift drivers) 1.x, 2.x Correlation for Thrift drivers only Yes "Cassandra Backends" on Java Backend Detection. For Cassandra server-side support, see Apache Cassandra Startup Settings.

Apache Lucene - Apache Solr 1.4.1 Entry Points Yes Solr Startup Settings

JBoss Cache TreeCache - - - JBoss Startup Settings

JBoss Infinispan 5.3.0+ Correlation - -

Red Hat JBoss DataGrid - - - JBoss Startup Settings

JBoss Cache - - - TreeCache

JBoss Infinispan 5.3.0+ Correlation -

Terracotta EhCache - - - EhCache Exit Points

Open Source Memcached - - - Memcached Exit Points

Open MongoDB 3.1 - - See "MongoDB Backends" on Java Backend Detection Source

Oracle Coherence 3.7.1 Custom-Exit Yes Coherence Startup Settings

Java Frameworks Support

The Java Agent supports these Java frameworks. Some require additional configuration. Click the link on the Java framework name in the following support matrix for information about additional configuration required or related configuration topics.

Vendor Framework Version SOA protocol Auto Naming Entry Exit Detection (WebServices) Points Points

Adobe BlazeDS - HTTP and JMS - Yes - adaptor

Adobe ColdFusion 8.x, 9.x - - Yes - Configuration required for transaction discovery

Apache Cassandra with Thrift framework - - - Yes Yes Entry and exit points are detected

Apache Struts 1.x, 2.x - - Yes Struts Actions are detected as entry points, struts invocation handler is instrumented

Apache Tapestry 5 - - Yes - Not by default

Wicket - - No Yes - Not by default

Apple WebObjects 5.4.3 HTTP Yes Yes - Yes

CometD 2.6 HTTP Yes Yes - -

Eclipse RCP (Rich Client ------Platform)

Google Web Toolkit (GWT) 2.5.1 HTTP Yes Yes - -

JBoss JBossWS Native Stack 4.x, 5.x Native Stack - - - -

Open Direct Web Remoting ------Source (DWR)

Open Enterprise Java Beans ( 2.x, 3.x - - Yes - - Source EJB)

Open Grails - - - Yes - Not by default Source

Open Hibernate JMS Listener 1.x - - - - - Source s

Open Java Abstract ------Source Windowing Toolkit (AWT)

Open Java Server Faces (JSF 1.x, 2.x - Yes Yes - - Source )

Open Java Server Pages 2.x - Yes - - Yes Source

Open Java Servlet API 2.x - - - - - Source

Open Jersey 1.x, 2.x REST, JAX-RS Yes Yes No Not by default Source

Open WebSocket 1.0 (Java EE - Yes, Yes, Yes Detection is automatic Source 7, JSR-356) BT Naming not correlation configurable not supported

Oracle Coherence with Spring 2.x, 3.x - - - - - Beans

Oracle Swing (GUI) ------

Oracle WebCenter 10.0.2,10.3.0 - - - - -

Open JRuby HTTP - - - Yes - Not by default Source

Spring Spring MVC - - - Yes - Not by default

Java Frameworks Configuration

For the Java framework environments that require additional configuration, this section provides some information and links to topics that help you configure the environment. Environments in the Java Frameworks Support table that require additional configuration link to the configuration table below.

Java Framework Configuration Notes

Adobe BlazeDS Example Message Queue Backend Configuration

Adobe ColdFusion Configuration is required for transaction discovery

Java Business Transaction Detection Servlet Entry Points

Apache Cassandra with Thrift framework No additional configuration is required.

Apache Struts Struts Entry Points

Apache Tapestry Java Business Transaction Detection Servlet Entry Points

Wicket Java Business Transaction Detection Servlet Entry Points

Apple WebObjects Business transaction naming can be configured via getter-chains, see

Getter Chains in Java Configurations Detect Transactions by POJO Method Invoked by a Servlet

CometD See also "HTTP Exit Points" on Java Backend Detection.

Open Source Enterprise Java Beans (EJB) EJB Entry Points

Open Source Hibernate JMS Listeners No additional configuration is required. See also:

Advanced Options in Call Graphs

Open Source Java Server Faces (JSF) Java Business Transaction Detection and Servlet Entry Points

Open Source Java Server Pages Servlet Entry Points

Open Source Jersey JAX-RS Support and node properties: rest-num-segments rest-transaction rest-uri-segment-scheme See App Agent Node Properties Reference for information on the properties.

Open Source JRuby HTTP Java Business Transaction Detection Servlet Entry Points

Open Source WebSocket Node property: -entry-calls-enabled

Spring MVC Java Business Transaction Detection Servlet Entry Points

RPC/Web Services API/HTTP Client Support

The Java Agent supports these RPC, web service, and API framework types.

Vendor RPC/Web Services Version SOA Protocol- Auto Correlation/Entry Exit Configurable BT Detection API Framework WebServices Naming Points Points Naming Properties

Apache Apache CXF 2.1 JAX-WS Yes Yes Yes Yes Yes

Apache Apache HTTP Client - HTTPClient (now in Apache HTTP Yes Yes (correlation Yes - Yes Components) only)

Apache Netflix-ribbon HTTP 2.1.0 HTTP Client Yes Yes (correlation) Yes - Yes Client Entry - NA

Apache Apache Thrift - - Yes Yes Yes Yes Yes

IBM WebSphere 6.x JAX-RPC - - - - -

IBM WebSphere 7.x, 8.x JAX-RPC - - - - -

IBM Websphere 7.x, 8.x IIOP - - - - -

Red Hat JBoss A-MQ 4.x+ RMI Yes Yes Yes Yes Yes

Open java.net.Http - HTTP Yes - Yes Yes Yes Source

Open HTTPClient 0.3-3 Oracle SOA (and potentially others - Correlation: Yes; Yes - Yes Source that embed this library) Entry: No

Oracle GlassFish Metro - JAX-WS - - - - -

Oracle GlassFish Metro with Grails - JAX-WS - Yes - - Not by Default

Oracle Oracle Application ORMI - no - - - - Server

Oracle WebLogic 10.x T3, IIOP Yes Correlation: Yes; Yes - Yes Entry: No

Oracle WebLogic 9.x, 10.x JAX-RPC - - - - -

Square OkHttp

Square OkHttp - HTTP Yes Correlation: Yes Yes - Synchronous Entry: No only

Sun Sun RMI - IIOP - Not by Default - - -

Sun Sun RMI - JRMP - No Yes host/port Yes

- Web Services - SOAP over HTTP - Yes Yes - -

RPC/Web Services API Framework Configuration

For the RPC and web service API environments that require additional configuration, this section provides some information and links to topics that help you configure the environment. Environments in the RPC/Web Services API Framework Support table that require additional configuration link to the configuration table below.

RPC/Web Services API Configuration Notes

Apache Commons See "HTTP Backends" on Java Backend Detection.

Apache Thrift Binary Remoting Entry Points for Apache Thrift

IBM WebSphere IBM WebSphere and InfoSphere Startup Settings

See also Instrument JVMs in a Dynamic Environment. The default configuration excludes WebSphere classes.

JBoss JBoss and Wildfly Startup Settings

Open See HTTP Exit Points on Java Backend Detection. Source java.net.Http

Oracle WebLogic Oracle WebLogic Startup Settings

Default configuration excludes WebLogic classes

Web Services Create Match Rules for Web Services

"Web Service Entry Points" on Java Backend Detection

Supported Platform Matrix for the .NET Agent

Supported Runtime Environments

This section lists the environments where the .NET Agent performs automatic discovery with little or no configuration.

OS Versions

Microsoft Windows Server 2003 (32-bit and 64-bit) Microsoft Windows Server 2008 (32-bit and 64-bit) Microsoft Windows Server 2008 R2 Microsoft Windows Server 2012 Microsoft Windows Server 2012 R2 Microsoft Windows 7, 8, 8.1

Microsoft .NET Frameworks

Microsoft .NET Framework versions 2.0, 3.0, 3.5, 4.0, 4.5, 4.5.2, 4.6

Runtime Environments

Microsoft IIS versions 6.0, 7.0, 7.5, 8.0, 8.5
Managed Windows Services
Managed Standalone Applications
Microsoft SharePoint 2010, 2013 as services running inside IIS

Microsoft Azure

Azure App Services* for .NET 4.6 environments in the Azure Portal
Web Apps
API Apps
Container Services
Azure Cloud Services
Web Roles
Worker Roles

* App Services requires the .NET Agent 4.2.3 or later.

Unsupported Frameworks

Microsoft .NET versions 1.0, 1.1
Unmanaged native code

Automatically Discovered Business Transactions

The .NET Agent discovers business transactions for the following frameworks by default. The agent enables detection without additional configuration.

Type | Custom Configuration Options? | Downstream Correlation?

ASP.NET* | Yes | Yes
ASP.NET MVC 2, 3, 4, 5 | Yes | Yes
.NET Remoting | No | See Enable Correlation for .NET Remoting.
Windows Communication Foundation (WCF) | No | Yes
Web Services including SOAP | No | Yes

Message Queues

Apache ActiveMQ NMS framework and related MQs | No | Yes
IBM WebSphere MQ | No | Yes
Microsoft Message Queuing (MSMQ) | No | Yes
Microsoft Service Bus / Windows Azure Service Bus | No | Yes
NServiceBus over MSMQ or RabbitMQ transport | No | Yes
RabbitMQ | Yes | Yes
TIBCO Enterprise Message Service | No | Yes
TIBCO Rendezvous | No | Yes
Windows Azure Queue | No | Yes

* The .NET Agent automatically discovers entry points for ASP.NET web forms with the Async property set to "true" in the Page directive.

Supported Loggers for the .NET Agent

Log4Net
NLog
System Trace
Windows Event Log

If you are using a different logger, see Configure Error Detection.

Remote Service Detection

The .NET Agent automatically detects the following remote service types. The agent enables detection by default. You don't need to perform extra configuration.

Type | Custom Configuration Options? | Async Detection?* | Downstream Correlation?

Directory Services, including LDAP | No | No | N/A
HTTP | Yes | See Asynchronous Exit Points for .NET. | Yes
MongoDB: C# and .NET MongoDB Driver version 1.10, 2.0 | No | See Asynchronous Exit Points for .NET. | N/A
.NET Remoting | Yes | No | See Enable Correlation for .NET Remoting.
WCF | Yes | See Asynchronous Exit Points for .NET. | Yes
WCF Data Services | Yes | No | No
Web Services, including SOAP | Yes | See Asynchronous Exit Points for .NET. | Yes

Data Integration

Microsoft BizTalk Server 2010, 2013 | No | Yes | See Correlation Over Microsoft BizTalk.

Message Queues

Apache ActiveMQ NMS framework and related MQs | Yes | No | Yes
IBM WebSphere MQ (IBM XMS) | Yes | No | Yes
Microsoft Message Queuing (MSMQ) | Yes | See MSMQ Backends for .NET. | See MSMQ Backends for .NET.
Microsoft Service Bus / Windows Azure Service Bus | No | Async exit points only. | Yes
NServiceBus over MSMQ or RabbitMQ transport | No | See NServiceBus Backends for .NET. | Yes
RabbitMQ | See RabbitMQ Backends for .NET. | No | Yes
TIBCO Enterprise Message Service | Yes | No | Yes
TIBCO Rendezvous | Yes | No | Yes
Windows Azure Queue | No | No | No

* The agent discovers asynchronous transactions for the Microsoft .NET 4.5 framework. See Asynchronous Exit Points for .NET for details.

Supported Windows Azure Remote Services

Type Customizable Configuration? Downstream Correlation?

Azure Blob No No

Azure Queue No No

Microsoft Service Bus No Yes

Data Storage Detection

The .NET Agent automatically detects the following data storage types. The agent enables detection by default. You don't need to perform extra configuration.

Type Customizable Configuration? Async Detection?* AppD for Databases?

ADO.NET (see supported clients below) Yes Yes No

Windows Azure Blob Storage No Yes No

Windows Azure File Storage No Yes No

Windows Azure Table Storage No Yes No

* The agent discovers asynchronous transactions for the Microsoft .NET 4.5 framework. See Asynchronous Exit Points for .NET for details.

Supported ADO.NET Clients

AppDynamics can monitor any ADO.NET client version and type. Clients we've tested include the following:

Database Name Database Version Client Type

Oracle 10, 11, 12 ODP.NET

Oracle 10, 11, 12 Microsoft Provider for Oracle

MySQL 5.x Connector/Net and ADO.NET

Microsoft SQL Server 2005, 2008, 2012 ADO.NET

Microsoft, SQL Server, and Windows are registered trademarks of Microsoft Corporation in the United States and other countries.

Supported Platform Matrix for the PHP Agent

PHP Versions

Supported PHP Versions Comment

5.2 Does not detect mysqli backends instantiated with the new keyword. See note below. PHP 5.2 is not supported on OSX.

5.3

5.4

5.5

5.6

PHP 5.2 Note

The PHP Agent is incompatible with PHP 5.2 applications that use the new keyword to instantiate a mysqli backend. For example, AppDynamics will not detect the mysqli backend created by a PHP 5.2 application that uses an expression like this:

// Does not get detected.
$db = new mysqli("localhost", "user", "password", "database");

The workaround is to change such expressions to use mysqli_connect():

$db = mysqli_connect("localhost", "user", "password", "database");

PHP ZTS Note

The PHP Agent is incompatible with the mode of PHP called Zend Thread Safety (ZTS).

If you are using ZTS, AppDynamics suggests that you review your dependencies on ZTS to confirm that you actually need it, and if you do not, to switch to non-ZTS mode.
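As an illustrative check (not part of the original document), you can verify from the shell whether your PHP build was compiled in ZTS mode; a ZTS build cannot be used with the PHP Agent:

# "Thread Safety => disabled" indicates a non-ZTS build; "enabled" indicates ZTS
php -i | grep -i "thread safety"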

PHP Web Servers

Supported Web Server Version Comment

Apache 2.2 in prefork mode using mod_php

Apache 2.4 in prefork mode using mod_php

Apache 2.2 in worker MPM mode using mod_fastcgi with php-fpm or mod_fcgid with php-cgi

Apache 2.4 in worker MPM mode using mod_fastcgi with php-fpm or mod_fcgid with php-cgi

Any Web Server compatible with php-fpm

Operating Systems

Supported Operating System Version

RHEL/CentOS 5.8+

Ubuntu 10+

Debian 6

OSX Mavericks

For information on how to install the PHP Agent to work with SELinux, see Special Considerations for PHP with SELinux.

Architecture

Supported Architecture

32-bit

64-bit

PHP Frameworks and Protocols

Framework/Protocol Version Entry Point Type

Drupal 7

WordPress 3.4+ Wordpress

Zend 1 & 2 PHP MVC

CodeIgniter 2.x PHP MVC

FuelPHP 1.5x & 1.6x PHP MVC

Magento 1.5, 1.6 & 1.7 PHP MVC

1 & 2 PHP MVC

CakePHP 2.x PHP MVC

HTTP PHP Web

CLI PHP CLI

If your PHP framework is not listed here, the agent detects your entry points as PHP Web and names the business transactions based on the first two segments of the URI (the default naming convention for PHP Web transactions). So it is still possible to monitor applications on "unsupported" frameworks. You can modify the naming convention used for PHP Web Entry points. See Configure PHP Web Transaction Naming.

Transaction Naming

Framework/Environment Default Transaction Naming

Drupal page callback name

Wordpress template name

PHP MVC Frameworks controller:action

PHP Modular MVC Frameworks module:controller:action

PHP Web URI

PHP Web Service service name.operation name

PHP CLI last two segments of the script's directory path plus the name of the script

Virtual host prefixing is available for all supported entry point types except PHP CLI.

PaaS Providers

PaaS Provider Buildpack

Pivotal Cloud Foundry https://github.com/Appdynamics/php-buildpack See http://docs.pivotal.io/appdynamics/index.html for information about integration with PCF.

Exit Points

Supported HTTP Exit Points

curl/curl-multi

drupal_http_request()

fopen(), file_get_contents()

Zend_HTTP_Client::request()

Supported Database Exit Points

MySQL old native driver

MySQLi Extension *

OCI8

PDO

PostgreSQL accessed via PDO and pgsql extensions

* mysqli_multi_query is not supported.

Supported Cache Exit Points Version

Memcache

Memcached

Predis 0.8.5

Predis is supported on PHP versions 5.3 and higher.

Although Predis is a full PHP client library, the PHP Agent supports Predis as an exit point only, not as an entry point.

Supported Web Service Exit Points

PHP SOAPClient

NuSOAP 0.9.5

Supported Message Queue Exit Points

RabbitMQ

RabbitMQ support requires the amqp extension.
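As an illustrative check (not from the original document), you can confirm that the amqp extension is loaded in the PHP build that serves your application:

# No output means the amqp extension is not loaded, so RabbitMQ exit points will not be detected
php -m | grep -i amqp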

Opcode Cache Compatibility

Alternative PHP Cache (APC)

Correlation with AppDynamics for Databases

AppDynamics for Databases version 2.7.4 or higher is required if you want to correlate AppDynamics for Databases with the PHP Agent.

Supported Platform Matrix for the Node.js Agent

Node.js Versions

As of Node.js Agent version 4.2.4, the Node.js Agent supports all major and minor versions of Node.js v0, v4, v5, and v6. The agent does not need to be updated for each new minor version update of Node.js. As of 4.2.13, the Node.js agent supports Node.js v7 for Linux and Windows.

For Node.js version support by Node.js Agent versions prior to 4.2.4, see the version lists below:

4.2.0

v0.8.x: v0.8.0 through v0.8.28
v0.10.x: v0.10.0 through v0.10.41
v0.12.x: v0.12.0 through v0.12.9
v4.x: v4.0.0, v4.1.0 through v4.1.2, v4.2.0 through v4.2.4
v5.x: v5.0.0, v5.1.0, v5.1.1, v5.2.0, v5.3.0

4.2.1

v0.8.x: v0.8.0 through v0.8.28
v0.10.x: v0.10.0 through v0.10.42
v0.12.x: v0.12.0 through v0.12.11
v4.x: v4.0.0, v4.1.0 through v4.1.2, v4.2.0 through v4.2.6, v4.3.0 through v4.3.2
v5.x: v5.0.0, v5.1.0, v5.1.1, v5.2.0, v5.3.0, v5.4.0, v5.4.1, v5.5.0, v5.6.0, v5.7.0, v5.7.1

4.2.2

v0.8.x: v0.8.0 through v0.8.28
v0.10.x: v0.10.0 through v0.10.44
v0.12.x: v0.12.0 through v0.12.13
v4.x: v4.0.0, v4.1.0 through v4.1.2, v4.2.0 through v4.2.6, v4.3.0 through v4.3.2, v4.4.0 through v4.4.4
v5.x: v5.0.0, v5.1.0, v5.1.1, v5.2.0, v5.3.0, v5.4.0, v5.4.1, v5.5.0, v5.6.0, v5.7.0, v5.7.1, v5.8.0, v5.9.0, v5.9.1, v5.10.0

4.2.3

v0.8.x: v0.8.0 through v0.8.28
v0.10.x: v0.10.0 through v0.10.45
v0.12.x: v0.12.0 through v0.12.14
v4.x: v4.0.0, v4.1.0 through v4.1.2, v4.2.0 through v4.2.6, v4.3.0 through v4.3.2, v4.4.0 through v4.4.5
v5.x: v5.0.0, v5.1.0, v5.1.1, v5.2.0, v5.3.0, v5.4.0, v5.4.1, v5.5.0, v5.6.0, v5.7.0, v5.7.1, v5.8.0, v5.9.0, v5.9.1, v5.10.0, v5.10.1, v5.11.0, v5.11.1

Operating Systems

Supported Operating System

Linux 32-bit

Linux 64-bit

Mac OSX v10.9.2

Microsoft Windows 2008R2/Windows 2012 for 64 bit applications for Node.js versions 0.12.0 and higher

Transaction Naming

Entry Type Default Transaction Naming

Node.js Web URI

HTTP Exit Points

Supported HTTP Exit Points

Node.js HTTP client library

See http://nodejs.org/api/http.html for information about the Node.js HTTP client library.

Database Exit Points

Supported Database Exit Points

MongoDB

MySQL

PGSQL

Riak

Riak backends are automatically detected, but they are displayed as HTTP backends in the flowmaps.

Cache Exit Points

Supported Cache Exit Points

Memcached

Redis

Supported Platform Matrix for the Python Agent

Python Versions

Supported Python Versions

CPython 2.6, 2.7, and 3.4

Operating Systems

Supported Operating System

Any Linux distribution based on glibc 2.5+

Mac OS X 10.8+

Python Frameworks and Protocols

Framework/Protocol Version Entry Point Type

WSGI 1.0 Python Web

New in 4.2.13, 3.2 - 4.2.2 Python Web

AppDynamics has tested the Python Agent on Django, Flask, and CherryPy.

You can configure the agent to instrument any WSGI-based application or framework as Python Web, including (but not limited to) those listed below.

At present, the Python agent fully supports exception detection in the Django, Flask, and CherryPy frameworks. Other WSGI frameworks and custom WSGI applications may install exception handlers that effectively hide some exceptions from the agent. In such cases, the agent will only detect exceptions during exit calls, uncaught exceptions which are propagated to the WSGI server, and exceptions reported via the custom business transaction API.

WSGI-Based Frameworks

Bottle

CherryPy

Django

Flask

PasteDeploy

Pyramid

Zope 3

Transaction Naming

Framework/Environment Default Transaction Naming

WSGI first two segments of the URI

Database Exit Points

Supported Database Exit Points Version

cx_Oracle, new in 4.2.15 5.1.x

MongoDB 3.1+

MySQL-Python

mysqlclient, new in 4.2.14

MySQL Connector/Python

Psycopg 2

PyMySql, new in 4.2.1

TorMySql, new in 4.2.11

HTTP Exit Points

Supported HTTP Exit Calls

httplib*

httplib2

requests

urllib

urllib2

urllib3

*The agent detects calls to any external library built on top of httplib. Therefore, backend calls to services such as boto, dropbox, python-, etc. are detected and displayed as HTTP exit calls.

Cache Exit Points

Supported Cache Exit Points

Memcache

Redis-py

Supported Platform Matrix for the Apache Server Agent

Apache Web Servers

Supported Apache Web Server Version

Apache HTTP Server 2.2.x (32-bit and 64-bit)
Apache HTTP Server 2.4.x (32-bit and 64-bit)
IBM HTTP Server 7.0+
Oracle HTTP Server 11g+

Operating Systems

Supported Operating System

Ubuntu 11+ (32-bit and 64-bit)
CentOS 5+ (32-bit and 64-bit)
Red Hat 5+ (32-bit and 64-bit)

Automatically Discovered Business Transactions

The Apache Agent automatically discovers the following business transactions:

Type Custom Configuration Options Downstream Correlation

Web (HTTP) Yes Yes

By default the agent excludes requests for the following static file types: bmp, cab, class, conf, css, doc, gif, ico, jar, jpeg, jpg, js, mov, mp3, mp4, png, pps, properties, swf, tif, txt, zip

Remote Service Detection

Apache Modules

The Apache Agent automatically detects loaded Apache modules as remote services. The agent excludes the following common modules from detection:

core.c

http_core.c

mod_access_compat.c

mod_actions.c

mod_alias.c

mod_allowmethods.c

mod_appdynamics.cpp

mod_auth_basic.c

mod_auth_digest.c

mod_authn_alias.c

mod_authn_anon.c

mod_authn_core.c

mod_authn_dbd.c

mod_authn_dbm.c

mod_authn_default.c

mod_authn_file.c

mod_authn_socache.c

mod_authnz_ldap.c

mod_authz_core.c

mod_authz_dbd.c

mod_authz_dbm.c

mod_authz_default.c

mod_authz_groupfile.c

mod_authz_host.c

mod_authz_owner.c

mod_authz_user.c

mod_autoindex.c

mod_cache.c

mod_cache_disk.c
mod_cgi.c
mod_data.c
mod_dav.c (included as of 4.2.3)
mod_dav_fs.c (included as of 4.2.3)
mod_dav_lock.c (included as of 4.2.3)
mod_dbd.c
mod_deflate.c
mod_dir.c
mod_disk_cache.c
mod_dumpio.c
mod_echo.c
mod_env.c
mod_expires.c
mod_ext_filter.c
mod_file_cache.c
mod_filter.c
mod_headers.c
mod_include.c
mod_info.c
mod_lbmethod_bybusyness.c
mod_lbmethod_byrequests.c
mod_lbmethod_bytraffic.c
mod_lbmethod_heartbeat.c
mod_log_config.c
mod_logio.c
mod_lua.c
mod_mem_cache.c
mod_mime.c
mod_mime_magic.c
mod_negotiation.c
mod_perl.c
mod_python.c
mod_remoteip.c
mod_reqtimeout.c
mod_rewrite.c
mod_setenvif.c
mod_slotmem_plain.c

mod_slotmem_shm.c

mod_so.c

mod_socache_dbm.c

mod_socache_memcache.c

mod_socache_shmcb.c

mod_speling.c

mod_ssl.c

mod_status.c

mod_substitute.c

mod_suexec.c

mod_systemd.c

mod_unique_id.c

mod_unixd.c

mod_userdir.c

mod_usertrack.c

mod_version.c

mod_vhost_alias.c

prefork.c

util_ldap.c

For End User Monitoring, the Apache Agent does not support:

automatic injection of the JavaScript adrum header and footer to instrument web pages
server-side business transaction correlation with Mobile Real User Monitoring

Supported Platform Matrix for the C and C++ Agent

Operating Systems

Supported Operating System

Linux 64-bit

Linux 32-bit

Microsoft Windows 2008R2/Windows 2012 for 32/64 bit applications

Supported Platforms for the Standalone Machine Agent

Supported platforms and environments for the Standalone Machine Agent are dependent on the specific metric data collection extension being used, which is dependent on the machine's OS. See Machine Agent Metric Collection for details.

JRE Requirements

The Standalone Machine Agent requires a Java Virtual Machine. JRE 1.7 or higher is required. Downloads for many of the supported OSs include Oracle JRE 1.8. The Standalone Machine Agent should work with most, if not all, of the JVMs supported by the Java Agent that are JRE 1.7 or higher. However, it is extensively tested only for the Oracle JDK and OpenJDK.

Note the following about the available Machine Agent downloads:

A Machine Agent ZIP without the JRE is available for AIX, HP-UX, and other machines that support JRE 1.7.
The downloads that include the JRE run only on x86 machines. To run the Machine Agent on other machine architectures, use the Machine Agent ZIP without the JRE.
If you are using a 64-bit operating system, use only a 64-bit Java Runtime Environment (JRE).

Note on handling large metric values: a 64-bit long has a maximum and minimum value of 9223372036854775807 and -9223372036854775808, respectively, whereas a 32-bit long has a maximum and minimum value of 2147483647 and -2147483648, respectively. To handle large values for metrics, run the Machine Agent using a 64-bit JDK.
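As an illustrative check (not from the original document), you can confirm the version and bitness of the JVM that will run the Machine Agent before starting it:

# A 64-bit JVM reports "64-Bit Server VM" in its version banner;
# a version older than 1.7 does not meet the Machine Agent's JRE requirement.
java -version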

Supported Environments

Tested Platforms

OS Architecture Versions Kernel Version

CentOS x86 (32-bit) 5.11, 6.6 2.6 and higher

x64 (64-bit)

Debian x86 (32-bit) 7 2.6 and higher

x64 (64-bit)

Fedora x86 (32-bit) 2.1 2.6 and higher

x64 (64-bit)

OpenSuSE x86 (32-bit) 12.3, 13.2 2.6 and higher

x64 (64-bit)

Ubuntu x86 (32-bit) 12, 14 2.6 and higher

x64 (64-bit)

Windows x86 32-bit Windows Server 2008 SP2 N/A

x64 (64-bit) Windows Server 2012 R2

Windows 8.x

Other Platforms

Other operating systems and versions that are supported by JRE 1.7 should also work, but are not fully tested by AppDynamics.

Oracle JRE 1.7: http://www.oracle.com/technetwork/java/javase/config-417990.html
IBM SDK, Java Technology Edition, Version 7
HP JDK/JRE 7.0.x Downloads

The Standalone Machine Agent is not supported on machines based on Power Architecture processors, including PowerPC processors.

Browser Requirements for Sessions

To use Browser RUM sessions, your browser must support the following:

cross-origin resource sharing (CORS) for beacons
local storage for multiple-page sessions (single-page/multiple virtual page sessions don't require local storage)

Browser RUM sessions are not supported for beacons implemented with GIFs.

Supported Platform Matrix for Mobile RUM

Android Agent

Supported Name Supported Environments Version(s)

Operating Android 2.3.3+ System

Frameworks Ant -

Gradle 1.8, 1.10, 1.12, 2.1

Maven 3.1.1+

HTTP Libraries HttpURLConnection/HttpsURLConnection -

HttpClient -

OkHttp 2.2.0–3.4.x

ch.boye.httpclientandroidlib -

Other HTTP libraries can be added by using the agent SDK. See Use the APIs of the Android SDK to Customize Your Instrumentation for more information.

iOS Agent

Supported Name Supported Environments Version(s)

Operating System iOS 6+

Architecture Apple 32-bit ARM, Apple 64-bit A7 -

Framework XCode 5+ 7+ for Bitcode Enabled

Apple WatchKit watchOS 1 Extension Environments

HTTP Libraries NSURLConnection -

NSURLSession -

Other HTTP libraries can be added by using the agent SDK. See Use the APIs of the iOS - SDK to Customize Your Instrumentation for more information.

Supported Platform as a Service (PaaS) Providers

PaaS Provider Description

Pivotal Cloud Foundry (PCF): Application and machine monitoring in PCF environments. AppDynamics provides built-in support for these Cloud Foundry buildpacks:

Java Buildpack 3.4 and higher. (See Using AppDynamics with Java Applications on Pivotal Cloud Foundry for a walkthrough of using the Java buildpack.)
PHP Buildpack 4.0+. (See Using AppDynamics with PHP Applications on Pivotal Cloud Foundry for a walkthrough of using the PHP buildpack.)

For more and updated information and a link to the AppDynamics tile page, see the AppDynamics APM Tile documentation.

Red Hat OpenShift 3: Docker images with built-in AppDynamics monitoring support for JBoss EAP 6.4 and WildFly 8.1.

For documentation and download information, see the AppDynamics Java APM Agent page on the Red Hat Customer Portal.

Download AppDynamics Software

On this page:

Download Tips Downloading from a Linux Shell

The AppDynamics Download Center (https://appdynamics.com/download) has the complete set of software components for download.

Be sure to get the version appropriate for your operating system from the AppDynamics Download Center, including correct bitness for your machine (32- or 64-bit).

If you haven't tried AppDynamics Pro yet, sign up for an account on www.appdynamics.com. After signing up, you can start a trial of AppDynamics Pro and download the Controller or get access to a SaaS Controller.

Download Tips

Always copy or transfer the downloaded files in binary mode. If you have downloaded a binary on Windows and you are moving it to another environment, the transfer program must use binary mode.

For each file you download, verify that the download is complete and that the file is not corrupted. Run a checksum tool and compare the results against the checksum information on the download site.
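For example, assuming a Linux shell and an illustrative file name (substitute the archive you actually downloaded, and compare the output with the checksum published on the download site):

# Compute checksums locally and compare them with the values shown on the download page
md5sum controller_64bit_linux-4.2.x.sh
sha256sum controller_64bit_linux-4.2.x.sh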

Downloading from a Linux Shell

To download AppDynamics software from a Linux shell, you can use cURL.

When using cURL, you first need to authenticate to the AppDynamics domain and store the resulting session ID in a file. Then send the request to download the software, passing the session information file as a cookie.

For example, run the following cURL commands:

curl -c cookies.txt -d 'username=<your_username>&password=<your_password>' https://login.appdynamics.com/sso/login/

curl -L -O -b cookies.txt <download_file_URL>

You can get the URL for the file you want to download at the AppDynamics Download Center. If you need help retrieving the URL, you can supplement your download process by using two APIs to help programmatically download AppDynamics software. After getting your cookie information, use the Version List API, which returns a JSON list of all available versions and when they were made publicly available. Then, after finding your version, you can list all available products for that version with the Download File List API. Find the "download_path" value within the response and add it to the curl command.

Platform Installation Quick Start

This section takes you through the steps for installing the complete platform, including the Controller, EUM Server and Events Service. These steps are meant to be an overview of the process, and are most suited for performing a demonstration or trial installation. For complete prerequisites, installation options, and detailed instructions, particularly for a production deployment, see the installation pages linked to from the steps below.

1. Install and configure the Controller as described in Install the Controller. Note that using a multi-tenant Controller with an on-premises EUM Server involves some additional configuration steps to accommodate multiple account licensing in EUM, which are not described here. Contact your AppDynamics representative for more information.

2. When installation is complete, verify that the Controller installed and started correctly by attempting to log in to the Controller UI at the URL shown in the final screen of the installer (typically http://<controller_host>:8090/controller).

3. The Controller is installed with an embedded Events Service instance. Start the embedded Events Service in the Controller as follows:

bin/controller.sh start-events-service

The embedded Events Service is not meant for high-volume environments, so you'll install and configure a standalone (remote) Events Service later.

4. Verify that the Events Service is running and healthy at its primary port (9080) and administration port (9081):
Verify that a request to http://controller.example:9080/_ping returns "_pong".
Verify that a request to http://controller.example:9081/healthcheck?pretty=true returns a response indicating that the services are healthy. The last item in the response should look something like this:

"events-service--store / -singlenode-module" : {
  "healthy" : true,
  "message" : "Current [appdynamics-events-service-cluster] cluster state: [GREEN], data nodes: [1], nodes: [1], active shards: [13], relocating shards: [0], initializing shards: [0], unassigned shards: [0], timed out: [false]"
}

5. Install and configure the dedicated Events Service. For Linux instructions, see Install the Events Service on Linux. For Windows instructions, see Install the Events Service on Windows.
6. Log in to the Controller Administration Console (http://<controller_host>:8090/controller/admin.jsp).
7. In the Controller Settings, find the following setting and set it to the Events Service URL and primary listening port for the dedicated Events Service:

appdynamics.analytics.server.store.url=http://events-service:9080

8. If you are installing the Events Service on Windows (or for any reason not using the Platform Administration Application), follow these steps:
a. While you're still in the Administration Console, search for the following Controller key setting and copy its value to your clipboard:

appdynamics.analytics.server.store.controller.key

b. In the Events Service configuration file at conf/events-service-api-store.properties, paste the Controller key as the value of these properties:

appdynamics.es.key.eum
appdynamics.es.key.controller

c. Start the dedicated Events Service using the following command:

bin/events-service.sh start -p conf/events-service-api-store.properties

d. Verify that the Events Service is running and accessible on the data and administration ports:
i. http://events-service.example:9080/_ping
ii. http://events-service.example:9081/healthcheck?pretty=true
9. Again in the Controller Administration Console:
a. Configure the Controller to use the standalone Events Service instance for EUM data by setting the eum.es.host property to the hostname and listening port for the Events Service instance, or to the VIP for the Events Service cluster at a load balancer.
b. Find and copy the value of the appdynamics.eum.key Controller property to a temporary file. You'll need this for the EUM Server installation.
10. Now install the EUM Server. This is done in two phases, as described on Install a Split (Production) EUM Server:
a. Configure the Controller for the EUM Server: choose Split Host Production Controller Configuration (install type 2 in the EUM Server installer). Specify localhost as the Controller DB hostname. Restart the Controller and Events Service.
b. Install the EUM Server: choose Split Host Production EUM Server Installation (install type 3 in the installer). When prompted by the installer, paste the appdynamics.eum.key value you copied earlier from the Administration Console. If the EUM Server does not have access to the Internet, provision the license using the provision-license tool. See the "License Not Installed" section in Install the EUM Server. Start the EUM Server:

bin/eum.sh start

Verify EUM is working by ensuring that it responds to requests to the following URLs:
http://<eum_server_host>:7001/eumcollector/get-version
http://<eum_server_host>:7001/eumaggregator/get-version
c. Activate EUM in the Controller UI: navigate to your_application > Configuration > Instrumentation > End User Monitoring, click Enable End User Monitoring, and then click the Save button.
d. Verify the license in the Controller License Management page.

Your platform installation is complete. You can now configure the agent-side components to connect to the platform.

Install the Controller

On this page:

About the Controller Installer
Before Starting
Installing with the GUI Installer
Installing in Console Mode
Installing in Silent Mode (Response File)
Installation Configuration Settings
Verifying the Controller Installation
Next Steps

Related pages:

Controller System Requirements

Watch the video:

Installing the Controller on Linux

This topic describes how to install the AppDynamics Controller on Microsoft Windows and Linux operating systems for an on-premises deployment of the AppDynamics Application Intelligence Platform. If using a SaaS Controller, the Controller is installed and administered for you.

About the Controller Installer

The AppDynamics Controller installer is an executable binary file that you can run in the following modes:

GUI: The installer presents a graphical interface for performing the installation. By default, the installer runs in GUI mode. If the operating system is not set up to display the GUI, the installer starts in console mode. To launch the installer in GUI mode on Linux, the system must have X Terminal support.
Console: The installer runs interactively in the console window.
Silent: The installer runs non-interactively, taking installation options from a response file.

All software components required for Controller installation are bundled with the installer, so you do not normally need to install additional software to install the Controller.

During installation you configure several accounts for the Controller, including the database account, a root user in the Controller, and an administrator in the Controller.

Usernames and passwords should not include the & or ! characters. If a user account needs to access the Controller REST API, additional limitations on the use of special characters in usernames apply. See Users and Groups for more information.

Before Starting

Run the installer as a user that has write permissions to the destination directory. Note that if you are installing other components of the platform, such as the EUM Server or Analytics Processor, you should install all components as the same user or as a user with equivalent permissions.
During installation and setup, the installer tries to start the Controller. This procedure can take some time. If startup exceeds the 30 minute default timeout, the installer exits but leaves its changes on disk, allowing you to troubleshoot the issue. When finished troubleshooting, replace the installation directory with the backup directory, apply the troubleshooting remediation, and restart the installation. Optionally, you can extend the timeout by passing the ad-timeout-in-min command line parameter to the installer with the new value in minutes.
The installer writes temporary files to the system temporary directory, typically /tmp on Linux or c:\tmp on Windows. The installer requires 1024 MB of free temp space. It checks for available space before proceeding, and quits with an "out of space" error if there is not enough. On Windows, in case of this error, you can set the temporary directory environment variable to a directory with sufficient space for the duration of the installation process, and restore the setting to the original temp directory when installation is complete. On Linux, you can pass an alternate directory for the installer to use as a temporary directory as an argument to the installer, in the format -Djava.io.tmpdir=<directory>. See the example after this list.
A time synchronization service, such as the Network Time Protocol daemon (ntpd), should be enabled on the Controller target machine.
Ensure that no MySQL processes are running when you install the Controller.
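For example, a minimal sketch of starting the Linux installer with an alternate temporary directory (the directory path is only illustrative; the timeout parameter mentioned above can also be added in the form your installer version expects):

# Use /opt/appd-tmp as the installer's temporary directory (must exist and have at least 1024 MB free).
./controller_64bit_linux.sh -Djava.io.tmpdir=/opt/appd-tmp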

Also note the following platform-specific requirements for Linux and Windows.

Linux notes

The Controller requires the libaio library to be present on Linux machines. See information on installing libaio in Configure Linux for the Controller.
Make sure the file descriptor limit is set to at least 65535. See information on limiting file descriptors on Linux in Configure Linux for the Controller.
The user account that performs the installation requires read/write/execute permissions on the directory where you install the Controller and write permission on the /etc/.java/.systemprefs directory.
If you are installing other AppDynamics Platform server components, such as the EUM Server or Application Analytics Processor, on the same machine, it is recommended that you perform the installation as the same user or as a user with the same permissions on the target machine.
On Linux systems, port numbers below 1024 may be considered privileged ports that require root access to open. The default Controller listen ports are not configured for numbers under 1024, but if you intend to set them to a number below 1024 (such as 80 for the primary HTTP port), you need to run the installer as the root user.

For more information about the Linux notes, see Configure Linux for the Controller.

Windows notes

Verify that you have administrative privileges on the Windows machine before launching the Controller installer.
Components of the .NET Framework 3.5 are required to allow the Controller to be installed as a Windows service on the target machine. The installer checks your system and indicates if .NET 3.5 is not found. Follow the instructions in the installer to get the required components. The Controller is automatically installed as a Windows service.
Windows 7 or Server 2008 R2 operating systems must have the hotfix described in http://support.microsoft.com/kb/2549760. This hotfix ensures that the Windows registry modifications made by the installer to extend the default service timeouts work as expected. The installer checks for the presence of the hotfix and warns you if it is not found.
After installation, you will need to configure virus scanning, Windows Defender, and the Windows indexing service to ignore the AppDynamics database directory, as described in Configure Windows for the Controller.
Never stop the Controller or its individual services (the AppDynamics Application service or AppDynamics Database service) from the Task Manager. To stop and restart the AppDynamics processes, open an elevated command prompt (run as administrator) and run the scripts for starting and stopping the service, as documented in Start or Stop the Controller.

Amazon AWS and host name notes

When you install the Controller, use the fully qualified host name for the application server that users and agents will use to access the Controller. Verify that the fully qualified host name is in the /etc/hosts file. The following example shows an entry in /etc/hosts with the IP 21.43.65.987, the fully qualified host name application1.mycompany.com, and the alias app1:

21.43.65.987 application1.mycompany.com app1

For AWS, provision an Elastic Network Interface (ENI) for each Controller host and link the license to the ENI. For more information about ENIs, see the AWS documentation at the following link: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html

Installing with the GUI Installer

On Linux:

1. At a command line, navigate to the directory to which you downloaded the installer.
2. Assign execute permissions to the downloaded installer script. For example:

chmod 775 controller_64bit_linux.sh

3. Run the installer script. For example:

./controller_64bit_linux.sh

On Windows:

1. Open an elevated command prompt (run as administrator).
2. Run the installer binary you downloaded. For example:

controller_32bit_windows.exe

Follow the instructions in the wizard to complete the installation. See installation options for additional information on the various screens and fields in the installer.

Installing in Console Mode

In console mode, the installer presents the installation configuration options in its command line interface. Functionally, this installation mode is equivalent to using the GUI installer or the silent mode installer.

To start the installer in console mode, use the -c switch when starting the installer. For example, on Linux, enter:

./controller_64bit_linux.sh -c

Follow the on-screen prompts to complete the installation. See installation options for more information on the various screens and fields in the installer.

Installing in Silent Mode (Response File)

To start the installer in silent mode, use the -q option and pass a response file as the value of the -varfile argument. The response file should contain the installation options for your instance.

Before starting, create and configure the response file. The following examples show response files for each operating system.

Linux response file:

controllerConfig=demo
iiopPort=3700
serverPort=8090
serverHostName=demoserver #Specify the controller hostname instead of demoserver
haControllerType=notapplicable
controllerTenancyMode=single
adminPort=4848
sys.languageId=en
jmsPort=7676
sys.installationDir=/home/appduser/AppDynamics/Controller
mysqlRootUserPassword=DRvYYv9eq6
databasePort=3388
userName=admin
password=pa55word
rePassword=pa55word
sslPort=8181
realDataDir=/home/appduser/AppDynamics/Controller/db/data
elasticSearchDataDir=/home/appduser/AppDynamics/Controller/events_service/analytics-processor
disableEULA=true
rootUserPassword=pa55word2
rootUserRePassword=pa55word2
reportingServiceHTTPPort=8020
reportingServiceHTTPSPort=8021
elasticSearchPort=9200

Be sure to set the values of the properties in the file, such as controllerConfig, the hostname, and the passwords, for your environment.

Windows response file:

controllerConfig=demo
iiopPort=3700
serverPort=8090
serverHostName=demoserver #Specify the controller hostname instead of demoserver
haControllerType=notapplicable
controllerTenancyMode=single
adminPort=4848
sys.languageId=en
jmsPort=7676
sys.installationDir=C\:\\AppDynamics\\Controller
mysqlRootUserPassword=DRvYYv9eq6
databasePort=3388
userName=admin
password=pa55word
rePassword=pa55word
sslPort=8181
realDataDir=C\:\\AppDynamics\\Controller\\db\\data
elasticSearchDataDir=C\:\\AppDynamics\\Controller\\events_service\\analytics-processor
disableEULA=true
rootUserPassword=pa55word2
rootUserRePassword=pa55word2
reportingServiceHTTPPort=8020
reportingServiceHTTPSPort=8021
elasticSearchPort=9200

Be sure to set the values of the properties in the file, such as controllerConfig, the hostname, and the passwords, for your environment.

When ready, start the installer with the -q switch and pass the response file as the -varfile argument.

On Linux, for example, the command would be similar to:

./controller_64bit_linux.sh -q -varfile /home/user/response.varfile

On Windows, similarly pass the path to the varfile. Run the command from an elevated command prompt. For example:

controller_64bit_windows.exe -q -varfile C:\temp\response.varfile

See the following section for more information on the options you can configure in the response file.

Installation Configuration Settings

Whether running the installer in GUI mode, console mode, or by passing a response file, the options you have while performing the installation are the same.

The installer does some basic system checking of your environment as it performs the installation. It notifies you if it encounters conditions that need to be addressed.

The installer configures listening ports for the Controller. In GUI and console mode, the installer checks that each port it suggests is available on the system before suggesting it. If a default port number is already in use, it suggests the next highest available port number. For example, if the usual default port of 8090 for the Controller application server is taken, the installer suggests 8091. You only need to edit a default port number if you know it will cause a future conflict or if you have some other specific reason for choosing another port.

Due to browser incompatibilities, AppDynamics recommends using only ASCII characters for usernames, passwords, and account names.

Configure your installation using these response file settings:

Each setting below is listed by its GUI mode screen or option label, followed by the response file variable in parentheses and a description.

License Agreement (disableEULA): Scroll to the end of the license agreement and accept the agreement to continue the installation, or do not accept to discontinue. Set to true in the response file to enable installation.

Consent to collect usage information (enableUDC): True to permit data statistics collection from your Controller. The data collected helps AppDynamics improve its products and services.

Destination directory (sys.installationDir): Directory in which to install the Controller. If not specified, the Controller is installed into the default directory of the target environment, such as AppDynamics/Controller under the user's home directory on Linux systems.

Database Root User's Password (mysqlRootUserPassword): The password of the user account that the Controller uses to access its MySQL database. Do not use the "!", "|", or "$" characters in this password.

Application Server Host Name (serverHostName): Server host name or IP address that AppDynamics agents and the AppDynamics UI use to connect to the Controller. Note that "localhost" and "127.0.0.1" are not valid settings. This will be the host name (or IP address) that AppDynamics app agents and Controller UI browser clients use to connect to the Controller.

Application Server Primary Port (serverPort): The primary HTTP port for the application server. If not specified, the installer uses the default, 8090, if it is available on the current machine. If it is not available, the installer suggests the next available port number. This will be the port that AppDynamics app agents and Controller UI browser clients use to connect to the Controller.

Database Server Port (databasePort): The port for the Controller database. If not specified, the installer uses the default, 3388.

Application Server Admin Port (adminPort): The application server's administration port. If not specified, the installer uses the default, 4848.

Application Server JMS Port (jmsPort): The application server's JMS port. This port is used for internal system communication. If not specified, the installer uses the default, 7676.

Application Server IIOP Port (iiopPort): The application server's IIOP port. This port is used for internal system communication. If not specified, the installer uses the default, 3700.

Application Server SSL Port (sslPort): The secure HTTP port for the application server. If not specified, the installer uses the default, 8181. This is the secure alternative to the application server primary port. By default, it uses a built-in, self-signed certificate. For a production deployment, you should install a CA-signed certificate.

Events Service REST API Port (restAPIPort): The Events Service REST API port. This port is used for internal system communication. 9080 by default.

Events Service REST API Admin Port (restAPIAdminPort): The Events Service REST API admin port. This port is used for internal system communication. 9081 by default.

Events Service Elastic Search Port (elasticSearchPort): The port on which the Elastic Search process listens. This port is used for internal system communication. 9200 by default.

Reporting Service HTTP Port (reportingServiceHTTPPort): The reporting service HTTP port. This port is used for internal system communication. 8020 by default.

Reporting Service HTTPS Port (reportingServiceHTTPSPort): The reporting service HTTPS port. This port is used for internal system communication. 8021 by default.

AppDynamics Controller Tenancy Mode Configuration (controllerTenancyMode): The tenancy mode: single or multi. Single tenancy is recommended for most installations. See Controller Tenant Mode and Accounts for more information. Multi-tenancy allows you to partition the AppDynamics environment into separate accounts in the UI.

Controller root User's Password (rootUserPassword): The Controller root user password. The root user is a Controller user account with privileges for accessing the system Administration Console. This password is used for the admin user of the built-in Glassfish application server as well. The Glassfish admin user lets you access the Glassfish console and the asadmin utility. See Access the Administration Console for more information about the console. Allowed characters in the password are: a-z, A-Z, 0-9, ., +, =, @, _, -, $, |, :, #, ,, %, (, ), !, {, }

Verify Controller root User's Password (rootUserRePassword): The root user password repeated for validation purposes.

User Name (Admin User Setup) (userName): The username of the administrator account in the Controller UI. This is the administrator for the built-in account in single-tenant systems, or for the initial account in multi-tenant systems. See AppDynamics Administrator. Usernames and passwords cannot include the @ or ! character. Also note that if this account will be used to access the REST API, additional limitations on the use of special characters in usernames apply. See Users and Groups for more information.

Password (Admin User Setup) (password): The admin user password.

Verify Password (Admin User Setup) (rePassword): The admin user password repeated.

Account Name (accountName): For multi-tenancy mode, the name for the initial account in the system. In the response file, ignored if controllerTenancyMode=single.

Account Access Key (accountAccessKey): Your AppDynamics account access key.

AppDynamics Controller Performance Profile (controllerConfig): The performance profile type for the instance. See Controller Hardware Requirements for information about how the types correspond to sizes. The installer limits the choice of profiles to the Demo and Small profiles on 32-bit systems. When you submit your choice in GUI or console mode, the installer checks your system for minimum requirements and warns you if any are not met. In the response file, use demo, small, medium, large, or extra-large for the profile types.

MySQL data directory (realDataDir): The path to the Controller's data directory.

Elastic Search data directory (elasticSearchDataDir): The path to the Elastic Search file store. Elastic Search is used by Database Monitoring features. By default, the directory is located in the Controller home. However, if you are putting the MySQL data directory in an alternate location, such as on a separate partition, the Elastic Search data directory would likely need to be put there as well. If you are not sure whether you will use database monitoring, you can keep the default location for now and change the data directory location later, if needed, in the events_service/analytics-processor/conf/analytics-all.properties file.

AppDynamics Controller High Availability Configuration (haControllerType): The high availability mode for this instance. For more information see Controller High Availability. In the response file, use primary, secondary, or notapplicable. If not specified, defaults to notapplicable, which means that HA is not enabled.

n/a (sys.languageId): Response file only. The language identifier for the system. en by default.

Verifying the Controller Installation

When the installation is finished, a status screen in the installer indicates that the installation is complete and the Controller is started.

Verify by navigating in a browser to the URL of the Controller UI:

http://<application_server_host_name>:<port>/controller

Log in using the credentials of the initial Controller administrator.

Next Steps

To access the Controller UI, you need to have a valid license. For a trial installation, the license file (license.lic) is bundled in your download. Otherwise, you may have acquired the license file from AppDynamics.

To apply a license file manually, copy the license file to the Controller home directory. After moving the license file, allow up to 5 minutes for the license change to take effect.
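For example, assuming the Controller is installed in /opt/AppDynamics/Controller (adjust the path for your installation):

cp license.lic /opt/AppDynamics/Controller/
# Allow up to 5 minutes for the Controller to pick up the new license.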

Verify that you have completed the following requirements based on your operating system:

For Linux, see Configure Linux for the Controller. For Windows, see Configure Windows for the Controller.

You may have completed some of the required configurations before you installed the Controller. For example, you cannot install the

Controller on Linux without libaio.

In addition to the required configuration, you can also configure the Controller to use Java 1.8. By default, the Controller uses Java 1.7. For more information about how to change the Java version, see Platform Version Information.

Controller System Requirements

On this page:

Controller Operating System Support
Java
Controller Performance Profile Definitions
Hardware Requirements per Performance Profile
Calculating Node Count for .NET Environments
Database Monitoring Considerations
AppDynamics for Databases Requirements for Demo and Small Profile Environments
Asynchronous Call Monitoring and End User Monitoring (EUM)
Running the Controller on Virtual Machines
Understanding Disk I/O Requirements
Measure Disk I/O
Linux file descriptor limit
Virtual Memory Space
Network Bandwidth Requirements
Internationalization Support
Reporting Service System Requirements

AppDynamics strongly recommends that you install the Controller on a dedicated machine for adequate stability and performance.

This page provides guidelines for the hardware and software requirements for the Controller machine, but keep in mind that each deployment is unique. Application behavior and features such as monitoring asynchronous threads and End User Monitoring increase the number of metrics per minute flowing to the Controller and can affect the requirements.

Controller Operating System Support

The Controller is supported on the following operating systems:

Linux (32 and 64 bit):
Red Hat Enterprise Linux (RHEL) 6.1, 6.2, 6.3, 6.4, 6.5, 6.6, 6.7, 7.0, 7.2, and 7.3
CentOS 5.9, 6.1, 6.2, 6.3, 6.4, 6.5, 6.6, 6.7, and 7.0
Fedora 14
Ubuntu 8, 12, 14
Open SUSE 11.x
SUSE Linux Enterprise Server 12

Microsoft Windows (32 and 64 bit):
Windows Server 2003
Windows Server 2008, Windows Server 2008 R2
Windows Server 2012 R1 Standard and Datacenter, Windows Server 2012 R2 Standard and Datacenter
Windows 7 Pro
Windows 8

Cloud: Amazon EC2, Rackspace, Azure

You can use the following file systems for Controller machines that run Linux:

ZFS EXT4 XFS

A Controller installer is also available for 64-bit Mac OS machines. However, running the Controller on Mac OS is intended for instructional, development or demonstration environments only, and not meant for production use.

Java

The Controller host machine must be able to run the Java version that the Controller uses. The Controller runs Java 1.7 by default. The installer also packages Java 1.8. You can switch to Java 1.8 after you install the Controller. For more information, see Platform

Version Information.

Controller Performance Profile Definitions

The Controller performs large-scale real-time data analysis and correlation for potentially many agents at a time. For best performance, the Controller must be installed on suitable hardware.

To determine the hardware needed for your Controller, first identify the expected performance profile for your deployment based on the following table. Once you've identified a profile, match it to the hardware requirements specified in the following section.

The table below lists, for each Controller performance profile, the maximum metric count per minute, the number of nodes (1), and the number of business apps (1):

Demo: 5 K metrics/minute; 1 to 5 nodes; 2 business apps
Small: 25 K metrics/minute; 6 to 10 nodes; 4 business apps
Medium: 250 K metrics/minute; 11 to 50 nodes; 8 business apps
Large: 512 K metrics/minute; 51 to 250 nodes; 15 business apps
Extra Large (2): 1 Million metrics/minute; 250+ nodes; 20+ business apps

(1) The primary consideration in sizing a Controller deployment is the metric ingestion rate per minute, as shown above. While the number of nodes and business apps in your deployment does not bear directly on system capacity, the numbers shown here are indicative of the corresponding profile sizes. The actual number of nodes or business applications the Controller can handle varies depending on the node type, features used, and many other factors.

(2) Deployments that exceed the Extra Large profile are possible, but you need to work closely with your AppDynamics account representative to determine the hardware requirements and the configuration and tuning measures needed to accommodate Extra Large or larger deployments.

Hardware Requirements per Performance Profile

The following table shows the hardware requirements typically required for a deployment by performance profile.

AppDynamics strongly recommends that you install the Controller on a dedicated machine. The hardware requirements described in this document assume that no other major processes are running on the machine where the Controller is installed, including no other Controllers. For the Extra Large profile and larger, the Controller must run on a dedicated machine.

Each profile below lists the supported OS platforms, CPU, RAM, and disk storage requirements:

Demo: Linux (32 & 64 bit), Windows (32 & 64 bit); 2 CPU cores, 1.5 GHz minimum; 3 GB RAM; 50 GB disk storage
Small: Linux (32 & 64 bit), Windows (32 & 64 bit); 4 CPU cores, 1.5 GHz minimum; 4 GB RAM; 200 GB disk storage
Medium: Linux (64 bit), Windows (64 bit); 8 CPU cores, 2.5 GHz minimum; 16 GB RAM; 2 TB disk storage
Large: Linux (64 bit), Windows (64 bit); 24 CPU cores, 2.5 GHz minimum; 32 GB RAM; 4 TB disk storage
Extra Large: Linux (64 bit); 24 CPU cores, 2.5 GHz minimum; 64 GB RAM; 8 TB disk storage

Notes on the Profile Hardware Requirements:

Large and Extra Large Controllers are not supported on virtual machines or on systems that use network attached storage without prior approval by AppDynamics. These deployments require you to work closely with your AppDynamics account representative throughout the planning and installation process. For more information, contact your AppDynamics account representative.
For Extra Large profile deployments and larger, you will need to work closely with your AppDynamics account representative for sizing, configuration, and additional considerations. For configuration information for Extra Large deployments, see Tuning for Large Scale Deployments.
The Demo profile is for demonstration and evaluation environments only.
An important limiting factor when considering deploying the Controller to a VM is disk I/O latency and throughput. For more, see additional information on I/O requirements.
Disk sizing reflects the estimate that each metric requires approximately 7 MB of storage. This assumes default settings for the Controller, including default data retention settings and business transaction registration limits.
The RAM recommendations leave room for operating system processes. However, the recommendations assume that no other memory intensive applications are running on the same machine. The actual RAM allocated by the installer per profile size for the Controller application server and database, in gigabytes, is 1.6, 3, 12, 24, and 48, respectively. Also see Controller Performance Profiles.
The agent count requirements do not reflect the additional requirements of using End-User Monitoring (EUM). See Additional Sizing Considerations for more information.
The Controller is not supported on machines that use Power Architecture processors, including PowerPC processors.

To perform the installation, the Controller installer requires around 200 MB of free space to be available in the system's temporary directory. For more information, see Install the Controller.

For information on hardware requirements for Database Monitoring, see Event Store.

Calculating Node Count for .NET Environments

The .NET Agent dynamically creates nodes depending on the monitored application's configuration in the IIS server. An IIS server can create multiple instances of each monitored IIS application. For every instance the .NET Agent creates a node. For example, if an IIS application has five instances, the .NET Agent will create five nodes, one for each instance.

The maximum number of instances of a particular IIS application is determined by the number of worker processes configured for its application pool. Consider the following diagram:

The diagram shows three application pools: AppPool-1, AppPool-2, and AppPool-3 with the following characteristics:

AppPool-1 and AppPool-3 can have a maximum of two worker processes (known as a web garden) AppPool-2 can have one worker process

If applications are assigned to application pools as follows:

AppPool-1: AppA and AppB

AppPool-2: AppC, AppD and AppE

AppPool-3: AppF

Use the following formula to estimate the number of nodes:

AppPool-1 (number of applications x max number of worker processes)
+ AppPool-2 (number of applications x max number of worker processes)
+ AppPool-3 (number of applications x max number of worker processes)
+ Windows service or standalone application processes
= Number of Nodes

In this case the numbers would be:

AppPool-1 nodes = 4 (2 applications * 2 worker processes)

AppPool-2 nodes = 3 (3 applications * 1 worker process)

AppPool-3 nodes = 2 (1 application * 2 worker processes) for a total of 9 nodes.

To find the number of CLRs that will be launched for a particular .NET Application / App Pool:

1. Open the IIS manager and see the number of applications assigned to that AppPool. 2. Check if any AppPools are configured to run as a Web Garden. This would be a multiplier for the number of .NET nodes coming from this AppPool as described above.

Also see: http://technet.microsoft.com/en-us/library/cc725601(v=ws.10).aspx.

Database Monitoring Considerations

The following guidelines can help you determine additional disk and RAM required for the machine hosting the Controller that is monitoring the Database Agent. For very large installations, you should work with your AppDynamics representative for additional guidelines.

For on-premises installations, the machine running the Controller and Events Service will require (for a data retention period of 10 days):

1 – 10 collectors: 2 GB RAM, single CPU
10 – 20 collectors: 4 GB RAM, 2 CPUs
More than 20 collectors: 8 GB RAM, 4 CPUs

Additional Sizing Considerations

Event Service

The Events Service is a file-based storage facility used by EUM, Database Monitoring, and Analytics. Database Monitoring uses the Events Service instance embedded in the Controller by default. The disk space required will vary depending upon how active the databases are and how many are being monitored.

For redundancy and optimum performance, the Events Service should run on a separate machine. For details on sizing considerations, see Events Service Sizing and Capacity Planning.

AppDynamics for Databases Requirements for Demo and Small Profile Environments

AppDynamics for Databases is an on-premises solution and can be installed on the same server as the AppDynamics Pro Controller, or on a different server. In either case it requires 1 CPU and 2 GB of RAM to monitor a single database instance. If you install it on the Controller server, these resources are in addition to the Controller resources.

Asynchronous Call Monitoring and End User Monitoring (EUM)

Monitoring asynchronous calls and using End User Monitoring (EUM) typically increases the number of metrics collected. As a result:

The Small profile is not supported for installations with extensive async monitoring and/or use of EUM.
A Medium profile running 40+ agents may need to upgrade to a configuration closer to a Large profile if extensive async monitoring and/or EUM are added.

Specifically the features affect the workload of the system as follows:

Monitoring asynchronous calls increases the number of metrics per minute to a maximum of 23,000 per minute.
EUM: Web RUM can increase the number of individual metric data points per minute by up to 22,000. Mobile RUM can increase the number of individual metric data points per minute by as much as 15,000 to 25,000 per instrumented application, if your applications are heavily accessed. The actual number depends on how many network requests your applications receive.

The number of separate EUM metric names saved in the Controller database can be larger than the kinds of individual data points currently being saved. For example, a metric name for a metric for iOS 5 might still be in the database even if all your users have migrated away from iOS 5. The metric name would no longer have an impact on resource utilization, but it would count against the default limit in the Controller for metric names per application. The default limit for names is 200,000 for Browser RUM and 100,000 for Mobile RUM.

Running the Controller on Virtual Machines

Medium and smaller profile Controllers can run on virtual machines, but with a few additional considerations:

The memory allocation for the Controller's virtual machine must be fully reserved RAM. Reserve as much as possible of the total memory allocation. The following page describes how to configure reserved memory on VMware: Set Memory Reservation on a Virtual Machine.
The Controller license is bound to the MAC address of the host machine. To run the Controller on a virtual machine, you must ensure that the host virtual machine uses a fixed MAC address.
The virtual machine that hosts the Controller must meet hardware resource requirements that are equivalent to physical machines, including I/O performance.
Running a Large or Extra Large profile Controller on a virtual machine is not supported without prior approval by AppDynamics. These deployments require you to work closely with your AppDynamics account representative throughout the planning and installation process. For more information, contact your AppDynamics account representative.

To test whether your virtualized storage subsystem is providing the required I/O capacity, see information on disk I/O rate tests.

Understanding Disk I/O Requirements

One of the most important indicators of a machine's ability to support the performance requirements of a Controller is the machine's disk I/O performance.

The average write latency for the disk that hosts the Controller should not exceed 3 milliseconds while the Controller is under sustained load. Be sure to test I/O latency on the machine for the Controller under load to best understand the hardware needed for your deployment.

The AppDynamics Controller always writes and reads using the following block sizes:

Read block size: 16 K
Write block size: 64 K

When considering I/O performance, keep in mind that while onboard disks typically satisfy I/O performance requirements, in a SAN environment the bottleneck in terms of I/O write and read speed becomes the latency between the CPU and the SAN-based storage.

AppDynamics strongly recommends that you avoid installing the Controller or putting the Controller's data directory, db/data, on an NFS-mounted filesystem. NFS adds latency and throughput constraints that can affect Controller performance and even lead to data corruption.

Similarly, you should avoid iSCSI or other SAN technologies that are subject to quality of service issues from the underlying network.

For best performance, if not using on-board disk storage, consider using a robust Fiber Channel, FCoE, or iSCSI over 10Gbit Ethernet SAN. In all cases, be sure to thoroughly load test the deployment with real-world traffic load before putting the Controller into a live environment.

Measure Disk I/O

To measure the disk I/O performance, AppDynamics recommends using the free fio tool. For information on fio, see the documentation for fio.
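For example, the following fio run is a rough sketch of a random-write test using the Controller's 64 K write block size; the test directory, size, job count, and runtime are only illustrative and should be adapted to your environment (use a scratch directory on the same volume as db/data, not the live data directory):

fio --name=appd-write-test --directory=/opt/AppDynamics/fio-test \
    --rw=randwrite --bs=64k --size=4g --numjobs=4 --direct=1 \
    --runtime=120 --time_based --group_reporting
# Review the completion latency (clat) output; sustained average write latency should stay under 3 milliseconds.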

Linux file descriptor limit

For Linux environments, it is important to set the file descriptor limit to at least 65535. See information on limiting file descriptors on Linux in Configure Linux for the Controller.
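To check the current limit for the user that runs the Controller, and as a sketch of raising it persistently (the user name appduser is an assumption; the exact mechanism varies by distribution):

ulimit -n        # show the current open file limit for this shell
# In /etc/security/limits.conf (or a file under /etc/security/limits.d/):
# appduser  soft  nofile  65535
# appduser  hard  nofile  65535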

Virtual Memory Space

The virtual memory size (swap space on Linux or Pagefile space on Windows) should be at least 10 GB on the target system, and ideally 20 GB.

Be sure to verify the size of virtual memory on your system and modify it if it is less than 10 GB. Refer to documentation for your operating system for instructions on modifying the swap space or Pagefile size.
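On Linux, for example, you can check the configured swap space with either of the following commands:

free -g      # shows total and used swap in GB
swapon -s    # lists active swap devices and their sizes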

Network Bandwidth Requirements

See Install and Administer Agents for information on bandwidth usage in an AppDynamics deployment.

Internationalization Support

The Controller and App Agents provide full internationalization support, with support for double- and triple-byte characters. This support provides the following abilities:

Controller UI users can enter double- or triple-byte characters into text fields in the UI.
The Controller can accept data that contains double- or triple-byte characters from instrumented applications.

Reporting Service System Requirements

The Reporting Service is a standalone Controller process responsible for generating and transmitting reports.

The Reporting Service relies on certain system libraries and resources that are usually included in standard Linux distributions. However, certain lightweight flavors of Linux may lack these requirements, primarily font libraries. In this case, errors in the reporting server log indicate missing components, such as a missing libfontconfig.so file.

The operating systems in which this type of error has been observed and the steps for resolving the issue by installing the missing component include:

CentOS 6.1, 6.2, 6.3, 6.4, and 6.5; Fedora 14:

$ yum install urw-fonts

Ubuntu 8, 12, 14:

$ sudo apt-get update
$ sudo apt-get install libfreetype6 libfreetype6-dev libfontconfig

Ubuntu 13:

$ sudo apt-get install libfontconfig

Controller Sizing FAQ

Q: How do I determine if my Controller is at or near its ability to adequately handle its metric load?

A: The following are indications of Controller performance issues:

1. The Controller UI performs slowly. For short time ranges, such as 15 or 30 minutes, responses that take longer than 10 to 20 seconds can indicate that your Controller is under stress.
2. When the Controller's metric reporting lags 7 to 10 minutes behind the current time, it can be an indication that your Controller is under stress. A lag of about 3 to 5 minutes is normal.
3. When monitoring the Controller environment, you see that CPU, memory, and disk metrics are at about 75% capacity.

Q. How do I size an on-premise Events Service deployment?

A: The Events Service sizing is based on data ingestion rate. For details, see Events Service Sizing and Capacity Planning.

Tuning for Large Scale Deployments

On this page:

Linux Settings
File System and RAID Recommendations
Glassfish Configuration
MySQL Configuration
Iterative Tuning
Controller Configuration
Top Summary Stats Configuration

A large scale AppDynamics deployment typically requires some additional performance tuning, both to the Controller configuration and to the host environment. A large scale deployment is one in which the Controller monitors 250 or more nodes. This matches the Extra Large performance profile, as defined in Controller Performance Profiles.

The recommendations on this page are provided as general guidelines only. For details and advice specific for your Extra Large deployment, you should work closely with your AppDynamics account representative.

In addition to the guidelines and configuration settings described here, it is especially important that any machine that hosts the Controller in high workload environments meets Controller System Requirements.

Linux Settings

Use the following settings for the operating system on which the Controller runs:

Use the Deadline I/O scheduler (see the example commands after this list).
Configure swappiness. See information on configuring swappiness on Linux in Configure Linux for the Controller.
Set the open file limit (ulimit) to 819200 or greater. See information on limiting file descriptors on Linux in Configure Linux for the Controller.
Set the per-process open file limit for soft and hard limits to 819200 or greater.
Allow the Web server to retry longer during stalls by setting higher TCP timeouts.
If using the Splunk extension, increase the maximum number of user processes to 2048.
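The commands below are a minimal sketch of applying these settings; the device name sda, the user name appduser, and the swappiness value of 10 are assumptions to be adapted to your environment:

echo deadline | sudo tee /sys/block/sda/queue/scheduler   # select the Deadline I/O scheduler for this device
sudo sysctl -w vm.swappiness=10                            # example swappiness value; see Configure Linux for the Controller
# In /etc/security/limits.conf, raise the open file and process limits for the Controller user:
# appduser  soft  nofile  819200
# appduser  hard  nofile  819200
# appduser  soft  nproc   2048
# appduser  hard  nproc   2048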

File System and RAID Recommendations

Recommended file system: XFS.
Recommended RAID version: There are only minor performance differences between RAID 5, 6, and 10. The reasons to use one over another are purely the level of safety and space requirements. The more disks you have in any RAID configuration, the better the performance, since more disks can work concurrently on disk I/O.
Recommended RAID configuration: Use a RAID controller with a Battery Backup Unit (BBU) to allow the database to use O_DIRECT sync mode as the flush method for faster disk writes. Never set O_DIRECT without a BBU. You can specify this setting using the innodb_flush_method variable in <controller_home>/db/db.cnf. When not specified in the file (as is the case by default), MySQL uses the default flush method, fdatasync. See the MySQL documentation for more information.
Disable individual hard drives from using onboard cache. All caching is to be done by the RAID controller.
For Storage Area Network (SAN) deployments, use multipath I/O for optimal performance and availability.
For either SAN or RAID controller systems, use the following file system mount options for your database storage mountpoint: "noatime,nodiratime,nobarrier,data=writeback" (see the example fstab entry after this list).
Onboard disks are the easiest option to assure performance. When you use remote disks (such as SAN or NAS), the configuration must be able to support very heavy disk I/O with very low latency. Requirements for the Controller database are similar to any production database that you run.
LVM partitions can be useful, especially for doing quick hot backups with LVM snapshots.
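As an illustration only, an /etc/fstab entry for the database volume might look like the following; the device name and mount point are assumptions, and the data=writeback option applies to ext-family file systems (omit it if the volume is formatted with XFS):

/dev/mapper/vg_appd-lv_data  /opt/AppDynamics/Controller/db/data  ext4  noatime,nodiratime,nobarrier,data=writeback  0 0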

Glassfish Configuration

You can edit most Glassfish settings in the following configuration file: <controller_home>/appserver/glassfish/domains/domain1/config/domain.xml

After modifying the file, you will need to restart the Controller to have your changes take effect.

The following section provides more settings relevant to the Controller. For complete information about configuring Glassfish see the Glassfish documentation.

To update the Glassfish settings

1. Set the thread count (XX) to a value of 12 x the number of CPU cores. In the tag, change the line containing "http-thread-pool" to:

If your Controller instance was upgraded from an earlier version, delete the following element if present, as it applies to Glassfish v2 only:

2. Replace the TCP transport element under server-config with the following. Replace the XX value in the acceptor-threads attribute with the number of cores in your system.

This sets the depth of the connection pool queue to 32K to allow for connections to queue up instead of being dropped during peak load bursts.

Note that there are two transport elements with the name value of tcp in the file. Be sure to modify the first one, below the server-config element.

3. If your Controller instance was upgraded from an earlier version, delete the following element if present, as it applies to Glassfish v2 only:

4. Increase the buffer size. In the section under , change the next line to:

5. Set the heap size to approximately 20 to 40% of the RAM size on your machine, using the higher end of the range if the RAM size is lower relative to the system requirements and a lower percentage for higher RAM sizes. Set Xmn to be about a third of the Xmx setting.

-Xmx30g -Xms30g -Xmn10g

The combined amount of RAM specified in this step and in your MySQL configuration should not exceed 80% of the total RAM for the system.

6. Replace the garbage collection settings with these specially tuned settings:

-XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+ScavengeBeforeFullGC -XX:TargetSurvivorRatio=80 -XX:SurvivorRatio=6 -XX:+UseBiasedLocking -XX:MaxTenuringThreshold=15 -XX:ParallelGCThreads=16 -XX:+OptimizeStringConcat -XX:+UseStringCache -XX:MaxPermSize=768m -XX:+UseCompressedOops -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70

7. Optionally, add garbage collection (GC)-related output options. For example:

-verbose:gc -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintClassHistogramBeforeFullGC -XX:PrintFLSStatistics=1 -XX:+PrintPromotionFailure -Xloggc:/var/log/controller/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=2 -XX:GCLogFileSize=512m -XX:+PrintTenuringDistribution

MySQL Configuration

The Controller uses MySQL as its database server. You should tune MySQL for scalability and better performance.

To configure MySQL

1. Shut down the Controller.
2. Configure the settings in the database configuration file, <controller_home>/db/db.cnf, with these values. Since some of these name/value pairs are already in the db.cnf file, search the file for each variable by name. If a variable already exists, simply replace the assigned value. Otherwise, add the name/value pair to the file.

thread_cache_size=120
table_definition_cache=500
open-files-limit=40960
innodb_open_files=3000
table_open_cache=4000
lock_wait_timeout=300
query_cache_size=0
long_query_time=4
innodb_flush_method=O_DIRECT   # ONLY DO IF YOU HAVE BATTERY BACKED RAID!
innodb_buffer_pool_size=76266M # This should be based on available RAM: approximately 40 to 60% of the RAM size.

Be sure to modify the innodb_buffer_pool_size value with the appropriate amount of RAM for your system. As noted, the value you use should be in the range of 40% to 60% of the total RAM on your system.

The combined amount of RAM specified in this step and in your GlassFish configuration should not exceed 80% of the total RAM for the system.

3. Restart the Controller.
4. Your database should now be running with the new settings, but we suggest that you verify your database settings. To do so:
a. Log in to the database:

<controller_home>/bin/controller.sh login-db

b. Enter the "SHOW VARIABLES;" command and verify that the variables are assigned the values you expect as reported in the command output.

Iterative Tuning

After you perform the configuration for the Glassfish server and MySQL, use the following recommendations to perform iterative tuning for your large-scale deployment:

Run the Controller for at least 3 days.
Examine Controller heap utilization and garbage collection metrics in the Controller system account.
If the Controller is performing fewer than 3 major garbage collections per day, decrease the size of the Java heap and increase the innodb_buffer_pool_size by the same amount.
If the Controller is performing more than 6 major garbage collections per day, decrease the innodb_buffer_pool_size and increase the Java heap by the same amount.
If the Java heap ends up being more than twice the size of the innodb_buffer_pool, consider adding RAM to the Controller host to handle the workload.

Controller Configuration

You need to adjust some Controller settings for configurations that handle large amounts of traffic. This includes increasing the events, snapshots, and buffer sizes, increasing the read and write thread counts, decreasing the node retention and node permanent deletion periods, etc.

To change the Controller settings

1. Make sure the Controller is running.
2. Log in to the Controller Administration Console at <controller_host>:<port>/controller/admin.jsp using the root password. See Access the Administration Console.
3. In the Administration Console, select Controller Settings.
4. Modify each of the following settings, clicking Save to save each update.

disable.historic.transactional.flow = true
If true, flow maps in the UI show activity from the last hour at most, and certain views in the UI present information based on cached data only, regardless of the time range selected in the UI. The time ranges available depend upon the cache.retention.period property value, which is set to 24 hours by default. When a user selects a time range that exceeds the cache retention period in the UI and navigates to a view affected by the limit, such as the business transactions list, a notification message appears indicating the actual time range displayed. This is false by default, but for large deployments it should always be set to true to prevent performance issues and timeout errors in the UI.

business.transaction.retention.period = 48

events.buffer.size = 50

snapshots.buffer.size = 200

metrics.buffer.size = 300 MB
For 1000 agents that ingest 100,000 metrics per minute, use 300 MB for the buffer size. Use this value as a guideline for tuning your Controller.

node.permanent.deletion.period = 168

node.retention.period = 6

read.thread.count = 3
Number of threads to use simultaneously to do READS from the database. Set this to 20% of the number of CPU cores but not greater than 4.

write.thread.count = 4
Number of threads to use simultaneously to load data into the database. Set this to the same value as read.thread.count but not greater than 4.

Top Summary Stats Configuration

You can tune the number of records that are retained to determine the 10 Top Summary Stats (TSS) with the following property:

tss.detailstring.retention.size: Multiply the value you set this property to by 50,000 to determine the total count of records for TSS before the records are purged.

You can find the property in the following file: /tools/config/controller-config-.properties.

As a starting point, set the value for the property to 20, which means that 1,000,000 records are retained. If you do not see 10 summary stats, increase the value so that more records are retained.
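For example, the entry in the properties file would look like this (retaining 20 x 50,000 = 1,000,000 records):

tss.detailstring.retention.size=20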

Controller Port Settings

On this page:

Controller Ports
Reinstalling the Controller with New Port Settings
Editing Controller Port Configurations

When deploying AppDynamics, whether using an on-premises or SaaS Controller, you will likely need to open ports in a network firewall or configure a load balancer to enable communication between the Controller and App Agents and client browsers.

For SaaS, you only need to adjust your infrastructure to accommodate the HTTPS port provided to you by AppDynamics. For an on-premises deployment, however, you may need to make additional adjustments.

This topic lists the ports used by the Controller.

Controller Ports

The Controller uses the following internal ports with a few exceptions:

Database server port: 3388
Application server admin port: 4848
Application server JMS port: 7676
Application server IIOP port: 3700
Events Service REST API port: 9080. If the Events Service and Controller are on different hosts, you will need to configure the port in the firewall or load balancer.
Events Service REST API admin port: 9081. If the Events Service and Controller are on different hosts, you will need to configure the port in the firewall or load balancer.
Events Service Elastic Search port: 9200
Reporting service HTTP port: 8020
Reporting service HTTPS port: 8021
EUM Server port (HTTP): 7001. If EUM and the Controller are on different hosts, you will need to configure the port in the firewall or load balancer.
EUM Server SSL port (HTTPS): 7002. If EUM and the Controller are on different hosts, you will need to configure the port in the firewall or load balancer.

The Controller uses the following external ports:

Application server primary port (HTTP): 8090
Application server SSL port (HTTPS): 8181
EUM Cloud SSL port (HTTPS): 443

At installation time, the install wizard may suggest other port numbers if the default ports are used on the target system, or you can enter different ports manually. After installation, you can change the port settings by either reinstalling the Controller or by editing the port configuration as defined in the underlying GlassFish application server, as described in the following sections.

Reinstalling the Controller with New Port Settings

Use the reinstall procedure if you want to modify connection settings without editing configuration files.

To reinstall the Controller with new port settings

1. Shut down the Controller. For instructions see Start or Stop the Controller.
2. Create a complete backup of the Controller installation directory and all of its subdirectories.
3. Reinstall the Controller in the same installation directory as your original Controller and choose the new port settings while running the installer. For instructions see Install the Controller or Configure Windows for the Controller.

Even though the versions are the same, this step is handled as an upgrade by the Controller installer.

Editing Controller Port Configurations

If reinstalling the Controller is not convenient, you can edit the ports manually by editing configuration files used by the application server for the Controller domain.

The following sections list the settings you need to modify to change a port. After making manual edits to configuration files, you will need to restart the Controller to have your changes take effect.

The port settings are in the domain configuration file:

appserver/glassfish/domains/domain1/config/domain.xml

Change the Primary Server Listening Port

1. In domain.xml, change the port number as it appears in these locations:
   - The value of the network-listener element with the attribute id="http-listener-1" for the primary listening port, or http-listener-2 for the secure listening port.
   - The JVM argument values for the Controller HTTP port and Controller services port under the config element named server-config.
2. For each deployed agent, navigate to proxy/conf in the agent home directory and change the controller-port value in controller-info.xml.
3. Change the serverPort or sslPort number in the Controller home directory: .install4j/response.varfile. This ensures that the new port number will not be overwritten upon upgrade.
A sketch for locating these entries follows these steps.
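As a quick aid, the following shell sketch shows one way to locate the relevant entries before editing them. The Controller home and agent home paths, and the current port value 8090, are placeholders; substitute your own values.

cd <controller_home>/appserver/glassfish/domains/domain1/config
grep -n 'http-listener-1\|http-listener-2' domain.xml
grep -n '8090' domain.xml
grep -n 'controller-port' <agent_home>/proxy/conf/controller-info.xml
grep -n 'serverPort\|sslPort' <controller_home>/.install4j/response.varfile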

Change the Database Port

1. In domain.xml, change the database listening port where it appears under the jdbc-connection-pool element named controller_mysql_pool. It appears as the value of the property named portNumber.
2. Edit the file appserver/glassfish/domains/domain1/imq/instances/imqbroker/props/config.properties to change the "imq.persist.jdbc..property.url" variable so that it includes the new port number. This variable is the JDBC connection string.
3. In db/db.cnf, set the "port=" variable to your new port setting.
4. In bin/controller.bat (or controller.sh), change the "DB_PORT" variable to your new port setting.
5. Edit .install4j/response.varfile to change the databasePort value to the new port. This ensures that the new port number will not be overwritten in a future Controller upgrade.
A sketch for locating these settings follows these steps.
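The following sketch shows one way to locate each of these settings before editing them; the Controller home path is a placeholder.

cd <controller_home>
grep -n 'portNumber' appserver/glassfish/domains/domain1/config/domain.xml
grep -n 'property.url' appserver/glassfish/domains/domain1/imq/instances/imqbroker/props/config.properties
grep -n '^port=' db/db.cnf
grep -n 'DB_PORT' bin/controller.sh
grep -n 'databasePort' .install4j/response.varfile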

Change the Glassfish Admin Listening Port

1. In domain.xml, change the port attribute value of the http-listener element to the new port. This is the element with an id attribute value of "admin-listener".
2. Also in the Controller home directory, change the adminPort value in .install4j/response.varfile. This ensures that the new port number will not be overwritten in a future Controller upgrade.

Change the JMS Port

1. In domain.xml, change the port attribute value for the jms-host element with the name attribute of default_JMS_host.
2. Change the jmsPort value in .install4j/response.varfile. This ensures that the new port number will not be overwritten in a future Controller upgrade.

Change the IIOP Listening Port

1. In domain.xml, edit the port attribute value of the iiop-listener element with an id attribute of orb-listener-1.
2. Change the iiopPort value in .install4j/response.varfile. This ensures that the new port number will not be overwritten in a future Controller upgrade.

Configure Linux for the Controller

On this page:

Install libaio on Linux
Install netstat
Install the Reporting Service Requirements
Configure User Limits in Linux

Related Pages:

Install the Controller as a Linux Service

This topic describes how to prepare your Linux operating system for running the AppDynamics Controller.

Install libaio on Linux

Before beginning the Controller installation, the installer checks whether the target Linux system has the libaio library installed. This library provides for asynchronous I/O operations on the system.

This topic provides instructions on how to install libaio for some common flavors of Linux operating system.

Red Hat and CentOS

Use the following command to install the libaio package:

yum install libaio

Ubuntu

Use the following command to install the libaio package:

sudo apt-get install libaio1

Fedora

Install the RPM for the libaio package from the Fedora website.

Debian

For Debian, you can use a package manager such as APT to install libaio (as described for the Ubuntu instructions above).

It is possible, although not generally advisable, to install libaio manually, as illustrated in the following steps.

The wget command shown below retrieves the current software package for libaio. Before performing the installation manually, get the URL of the latest version of libaio as hosted at a software download mirror location convenient to you, as described here: https://www.debian.org/distrib/packages.

mkdir /usr/libaio
cd /usr/libaio
wget http://ftp.us.debian.org/debian/pool/main/liba/libaio/libaio1_0.3.109-3_i386.deb
ar -x libaio1*.deb
tar zxvf data.tar.gz
sudo cp lib/* /emul/ia32-linux/lib/

Install netstat

Verify that your distribution of Linux includes the netstat network utility. If it does not, install the utility. The Controller installation uses netstat to determine whether MySQL processes are running.

For example, you can install the package that includes netstat with the following command on CentOS:

yum install net-tools

Install the Reporting Service Requirements

Fonts

To use Reports, you must have Fontconfig and FreeType installed as well as at least one sans-serif font.

For example, run the following command if the Controller host runs CentOS:

yum install fontconfig freetype freetype-devel fontconfig-devel libstdc++ urw-fonts

GNU C Libraries

The Reporting Service requires GLIBCXX_3.4.9 or later and GLIBC_2.7 or later to run.

For more information and download instructions, see https://www.gnu.org/software/libc/.

Configure User Limits in Linux

AppDynamics recommends the following hard and soft per-user limits in Linux:

Open file descriptor limit (nofile): 65535
Process limit (nproc): 8192

The following log warnings may indicate insufficient limits:

Warning in the database log: "Could not increase number of max_open_files to more than xxxx".
Warning in the server log: "Cannot allocate more connections".

To check your existing settings, as the root user, enter the following commands:

ulimit -S -n
ulimit -S -u

The output indicates the soft limits for the open file descriptor and soft limits for processes, respectively. If the values are lower than recommended, you need to modify them.

Where you configure the settings depends upon your Linux distribution:

- If your system has a /etc/security/limits.d directory, add the settings as the content of a new, appropriately named file under the directory.
- If it does not have a /etc/security/limits.d directory, add the settings to /etc/security/limits.conf.
- If your system does not have a /etc/security/limits.conf file, it is possible to put the ulimit command in /etc/profile. However, check the documentation for your Linux distribution for the recommendations specific to your system.

To configure the limits:

1. Determine whether you have a /etc/security/limits.d directory on your system, and take one of the following steps depending on the result.
   If you do not have a /etc/security/limits.d directory:
   a. As the root user, open the limits.conf file for editing:

/etc/security/limits.conf

b. Set the open file descriptor limit by adding the following lines, replacing <login_user> with the operating system username under which the Controller runs:

<login_user> hard nofile 65535
<login_user> soft nofile 65535
<login_user> hard nproc 8192
<login_user> soft nproc 8192

If you do have a /etc/security/limits.d directory:
a. As the root user, create a new file in the limits.d directory. Give the file a descriptive name, such as the following:

/etc/security/limits.d/appdynamics.conf

b. In the file, add the configuration setting for the limits, replacing <login_user> with the operating system username under which the Controller runs:

<login_user> hard nofile 65535
<login_user> soft nofile 65535
<login_user> hard nproc 8192
<login_user> soft nproc 8192

2. Enable the file descriptor and process limits as follows:
   a. Open the following file for editing:

/etc/pam.d/common-session

b. Add the line:

session required pam_limits.so

3. Save your changes to the file.

When you log in again as the user identified by <login_user>, the limits will take effect.
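To confirm the new limits are in effect, you can check them from a fresh session for the run-as user; the username below is the same placeholder used above.

su - <login_user> -c 'ulimit -Sn; ulimit -Su; ulimit -Hn; ulimit -Hu'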

Install the Controller as a Linux Service

On this page:

Installing the Service
Uninstall the Service

After installing the Controller, you can configure it to start automatically on Linux using a script included in the AppDynamics High Availability (HA) toolkit, install-init.sh.

While the script is included in the HA toolkit, you can use the script to install the Controller as a service even if you are not deploying an HA environment.

To run the script you need to be logged in as the root user, since the script creates a new AppDynamics user in your sudoers group based on your existing Controller root user. The script starts the Controller as this user.

In addition, there needs to be a C compiler on the system. You can install a C compiler using the package manager for your operating system. For example, on Yum-based Linux distributions, you can use the following command to install the GNU Compiler Collection, which includes a C compiler:

sudo yum install gcc

Installing the Service

To install the Controller as a Linux service using the HA script:

1. Create a directory named HA in your Controller home directory and install the toolkit in the directory as described in Using the High Availability (HA) Toolkit. 2. Save and encrypt the MySQL password:

cd HA
./save_mysql_passwd.sh

This will prompt for the MySQL password, which will then be saved in encrypted form.


New Controller installations remove the db/.rootpw file from the Controller installation for security reasons. Only the internal scripts use the password to access the database. Because the HA package requires frequent database access, it is impractical to prompt for the password every time the database is used. After saving the encrypted password, there is nothing else you need to do with the password.

3. Log in as the root user. For example, from the command line:

su - root

4. Run the script from the Controller home as follows:

bash HA/install-init.sh

The Controller now starts automatically when the Controller machine starts up.

You can use the following command to start, stop, or restart the service manually, or to check the status of the service:

/etc/init.d/appdcontroller {start|stop|restart|status}

Uninstall the Service

You can run uninstall-init.sh (also included in the HA toolkit) to remove the service.

Configure Windows for the Controller

On this page:

Virus Scanners
Windows Defender Scanning
Windows Indexing Service
Windows Update

Related pages:

Uninstall the Controller

This page provides operational and setup guidelines for running the Controller on Windows. To avoid system conflicts or data corruption, you may need to make some system changes on Windows host operating systems, as described here.

Virus Scanners

Configure virus scanners on the target machine to ignore the AppDynamics database directory. Code is never executed from the data directory, so it is generally safe to exclude this directory from virus scanning. The default location of the data directory is \db\data under the Controller home directory.

Also configure virus scanners to trust the Controller launcher, database executable, reporting service launcher, and events service (analytics processor) launchers. The launcher names are:

Controller launcher: AppDynamicsDomain1Service.exe
MySQL executable: mysqld.exe
Events service launcher: analytics-processor.exe
Reporting service launcher: appdynamicsreportingservice.exe

Windows Defender Scanning

Exclude the Controller data directory (\db\data) from scanning by Windows Defender. If you are not sure whether Windows Defender is running on the system, check for it in your local Services list. You can either configure the Controller data directory to be excluded in the Windows Defender Control Panel, or disable the service altogether if it is not needed. For details on how to view services and exclude directories in Windows Defender, refer to the documentation for your version of Windows.

Windows Indexing Service

Ensure that the Windows indexing service is configured to ignore the Controller data directory (\db\data):

The data directory does not contain any files that are meaningful to the indexer, so it can be excluded from indexing. To exclude the directory from indexing, you can add the directory to the excluded directories list in the Indexer Control Panel, disable indexing in directory preferences, or stop the indexing service entirely.

To add the directory to the excluded directory list, follow these steps:

1. From the Control Panel Indexing Options dialog, click the Modify button.
2. In the Indexed Locations dialog, navigate to and select the Controller data directory.
3. Clear the check box for the data directory and click OK.

Windows Update

Configure the Windows Update preferences so that the server is not automatically restarted after an update. To configure the restart policy:

1. Open the Local Group Policy Editor window (search for and run the "gpedit" executable).
2. Navigate to the Windows Update component. In the tree, you can find it under Local Policy > Computer Configuration > Administrative Templates > Windows Components.
3. Double-click on the "No auto-restart..." setting.
4. Select the Enabled option and click Apply.

Deploy the Controller to Production

On this page:

Deployment Tasks
Network Requirements

Related pages:

Install the Controller

Deploying AppDynamics to its live operating environment introduces requirements and considerations beyond those applicable to an evaluation or staging installation.

Security, availability, scalability, and performance all play an important role in production deployment planning. The system resources of the machine that hosts the Controller in a live environment must be able to support the expected workload.

This topic covers considerations applicable to deploying AppDynamics to its live environment.

Deployment Tasks

A typical deployment of AppDynamics involves these tasks and considerations:

- Ensure that target systems meet the Controller System Requirements for the Controller's expected workload. For high volume environments, tune the OS and Controller settings for high workload. Also see Configure Linux for the Controller for recommendations on swappiness and other settings.
- Implement Controller High Availability to ensure service continuity in the event of a failure of the Controller server.
- Configure the network environment. If deploying the Controller with a reverse proxy, configure passthrough of Controller traffic. Also note other Network Requirements for the deployment environment.
- Implement security requirements for your environment. If clients will connect to the Controller by HTTPS, install your custom SSL server certificate on the Controller.
- Generate a password management strategy for the built-in system accounts in the Controller and platform.
- Make sure the mail server is properly configured for the Controller in the target environment and define your alerting strategy.
- Devise your backup strategy. A typical backup strategy consists of frequent partial backups with intermittent full backups.
- Plan your configuration maintenance and enhancement strategy. Changes to the configuration should be staged in a non-critical environment, and rolled into the live environment only after thorough testing. The AppDynamics UI and REST API offer the ability to export and import configuration settings from various contexts.
- Deploying App Agents is likely to be an ongoing task, especially in dynamic environments where monitored systems are regularly taken down and new ones brought up. There are two basic strategies for deploying large numbers of App Agents across a managed environment:
  1. Deploy the agents independently of the application inside the application server. This method ensures that re-deployments of the application do not overwrite the agent deployment.
  2. Integrate deployment of AppDynamics agents into the deployment of applications. This more sophisticated approach requires modifying the existing application deployment automation scripts. For details, see Automate Java Agent Deployment and Unattended Installation for .NET.

Network Requirements

Deploying the Controller often calls for configuration changes to existing network components, such as to firewalls or load balancers in the network. If the Controller will reside behind a load balancer or reverse proxy, you need to set up traffic forwarding for the Controller. You may also need to open ports used by AppDynamics on firewalls or any other device through which traffic must traverse.

The following are general considerations for the environment in which you deploy AppDynamics. See Quick Install for other network configuration requirements.

Correlation HTTP Header

AppDynamics adds a custom header to traffic in the monitored environment named singularityheader. This header enables AppDynamics to correlate traffic across tiers. It's important to ensure that any load balancer, proxy or firewall in the network between monitored tiers or between the tiers and the Controller preserves the header added by AppDynamics.

Clock Management

To ensure consistent event time reporting across the AppDynamics deployment, each App Agent attempts to synchronize its time with the Controller time.

It does so by retrieving the time from the Controller every five minutes. It then compares the Controller's time to its own local machine's clock time. If the times are different, whether ahead or behind, it applies a time skew based on the difference to the timestamps for the metrics it reports to the Controller.

If, despite the agent's attempt to report metrics based on the Controller time, the Controller receives metrics that are time-stamped ahead of its own time, the Controller rejects the metrics. To avoid this possibility, AppDynamics recommends maintaining clock-time consistency throughout your monitored environment.

Deploy with a Reverse Proxy

On this page:

About these Instructions
General Guidelines
Using Nginx as a Simple HTTP Reverse Proxy
Using Apache as a Reverse Proxy
Configure SSL Termination at the Reverse Proxy
Using SSL from the Reverse Proxy to the Controller

A common data center design involves putting backend services such as the AppDynamics Controller in a network behind a DMZ. For the Controller, a network proxy residing in the DMZ acts as an endpoint for the Controller by presenting a virtual IP address for it; App Agents and UI browser clients connect to the Controller through the virtual IP.

In addition to providing a security layer, a reverse proxy allows you to move a Controller to another machine or switch between high availability pairs without having to reconfigure and restart monitored applications.

In a sample scenario, the reverse proxy listens for incoming requests on a given path, /controller in this case, on port 80. It forwards matching requests to the HTTP listening port of the primary Controller at appdhost1:8090. In terms of network impact, switching active Controllers from the primary to the secondary in this scenario only requires the administrator to update the routing policy at the proxy so that traffic is directed to the secondary instead of the primary.

If clients use SSL, the reverse proxy can terminate SSL connections or maintain SSL through to the Controller. Terminating SSL at the proxy removes the processing burden from the Controller machine. It can also simplify administration for the data center as a whole by centralizing SSL key management to a single point and it allows you to use alternative PKI infrastructures like OpenSSL.

About these Instructions

There are various types of devices and software that can act as a reverse proxy. For example, Nginx, HAProxy, Apache Web Server, or an application-level load balancer such as F5's BIG-IP can all act as a reverse proxy for the Controller.

This page provides general considerations for setting up the Controller with a reverse proxy. It also provides sample configurations for a few specific types of proxies.

It is important to note that this information is intended for illustration purposes only. The configuration requirements for your own deployment are likely to vary greatly, depending on the existing environment, the applications being monitored, and the practices and policies of your organization.

While AppDynamics supports Controllers that are deployed with a reverse proxy, AppDynamics Support cannot guarantee help with specific setup questions and issues particular to your environment or the type of proxy you are using. For this type of information, please consult the documentation provided with your proxy technology. Alternatively, try posting the question to the AppDynamics community.

General Guidelines

The following are general requirements, considerations, and features for deploying the AppDynamics Controller and App Agents with a reverse proxy.

Set the deep link JVM option, -Dappdynamics.controller.ui.deeplink.url, to the value of the Controller URL using the modifyJvmOptions utility. Use either the hostname or IP address of the Controller host (if directly accessible to clients) or the VIP for the Controller as exposed at the proxy, in the following format:

modifyJvmOptions add -Dappdynamics.controller.ui.deeplink.url=http[s]://controller.corp.example.com[:port]/controller

Use the URI scheme (http or https), hostname and port number appropriate for your Controller. The Controller uses the deep link value to compose URLs it exposes in the UI. If terminating SSL at the proxy, also set the following JVM options:

-Dappdynamics.controller.services.hostName=
-Dappdynamics.controller.services.port=

- If the proxy sits between monitored tiers in the application, make sure that the proxy passes through the custom header that AppDynamics adds for traffic correlation, singularityheader. Most proxies pass through custom headers by default.
- For App Agents, the Controller Host and Controller Port connection settings should point to the VIP or hostname and port exposed for the Controller at the reverse proxy. For details see Connect the Controller and Agents.
- If using SSL from the agent to the proxy, ensure that the security protocols used between the App Agent and proxy are compatible. See the compatibility table for the SSL protocol used by each version of the agent.
- If the proxy (or other network device) needs to check for the availability of the Controller, it can use the Controller REST resource at http://<host>:<port>/controller/rest/serverstatus. If the Controller is active and, in high availability mode, is the primary, it returns an XML response.

The response indicates that the Controller is available (true) and identifies the product (AppDynamics Application Performance Management) and the server version (for example, 003-008-000-000).

If the Controller is the standby Controller, this resource returns an error response.
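For example, a monitoring probe or load balancer health check might poll this resource with curl; the hostname and port below are placeholders for your Controller or its VIP.

curl -is http://controller.example.com:8090/controller/rest/serverstatus

An HTTP 200 response with the XML body generally indicates the active (primary) Controller; a non-200 or error response indicates a standby or unavailable Controller.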

The following sections provide notes and sample configurations for a few specific types of proxies, including Nginx and Apache Web Server.

Using Nginx as a Simple HTTP Reverse Proxy

Nginx is a commonly used web server and reverse proxy available at http://nginx.org/.

To use Nginx as a reverse proxy for the Controller, simply include the Controller as the upstream server in the Nginx configuration. If deploying two Controllers in a high availability pair arrangement, include the addresses of both the primary and secondary Controllers in the upstream server definition.

The following steps walk you through the setup at a high level. They assume you have already installed the Controller and have an Nginx instance, and you only need to modify the existing configuration to have Nginx route traffic to the Controller.

In the sample configuration used in these steps, the Controller listens on 127.0.15.11:8090 and the proxy exposes it as appdcontroller.example.com on port 80.

To route Controller traffic through an Nginx reverse proxy

1. If the Controller is running, shut down the Controller.
2. Add a JVM option named -Dappdynamics.controller.ui.deeplink.url using the modifyJvmOptions utility. Set its value to the URL for the Controller, as described in the guidelines above.
3. If terminating SSL at the proxy, also set the -Dappdynamics.controller.services.hostName and -Dappdynamics.controller.services.port JVM options to the external DNS hostname for the Controller and the external port number, typically 443.
4. In the Nginx home directory on the reverse proxy machine, open the conf/nginx.conf file for editing.
5. In the configuration file, add a cluster definition that specifies each Controller as an upstream server. For example:

upstream appdcontroller {
    server 127.0.15.11:8090 fail_timeout=0;
}

server {
    listen 80;
    server_name appdcontroller.example.com;

    expires 0;
    add_header Cache-Control private;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        proxy_pass http://appdcontroller;
    }
}

In the sample, the Controller resides on 127.0.15.11 and has the fully qualified domain name appdcontroller.example.com.
6. Restart the Nginx server to have the change take effect (see the validation sketch below).
7. Restart the Controller.
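Before restarting Nginx, you may want to validate the edited configuration and reload it without dropping connections; the following is a minimal sketch, assuming the nginx binary is on the PATH (service management commands vary by distribution).

sudo nginx -t
sudo nginx -s reload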

After the Controller starts, it should be able to receive traffic through Nginx. As an initial test of the connection, try opening the Controller UI via the proxy; that is, in a browser, go to http://<proxy_host>:80/controller. For the App Agents, you'll need to configure their Controller host and port settings to point to the proxy, as described in the general guidelines above.

Using Apache as a Reverse Proxy

To use Apache as a reverse proxy, you need to make sure the appropriate Apache module is installed and enabled in your Apache instance. For HTTP proxying, this is typically mod_proxy_http. The mod_proxy_http module supports proxied connections that use HTTP or HTTPS.

To configure Apache with mod_proxy_http

1. If the Controller is running, shut it down.
2. Add a JVM option named -Dappdynamics.controller.ui.deeplink.url using the modifyJvmOptions utility. Set its value to the URL for the Controller, as described in the guidelines above.
3. If terminating SSL at the proxy, also set the -Dappdynamics.controller.services.hostName and -Dappdynamics.controller.services.port JVM options to the external DNS hostname for the Controller and the external port number, typically 443.
4. On the machine that runs Apache, check whether the required modules are already loaded by your Apache instance by running this command:

apache2ctl -M

In the output, look for proxy modules as follows:

proxy_module (shared)
proxy_http_module (shared)

The proxy_module is a dependency for proxy_http_module.
5. If they are not loaded, enable the Apache module as appropriate for your distribution of Apache. For example, on Debian/Ubuntu:
   a. Type the following:

sudo a2enmod proxy_http

b. Restart Apache:

sudo service apache2 restart

6. Add the proxy configuration to Apache. For example, a configuration that directs client requests arriving on the standard web port 80 at the proxy host to the Controller could look similar to this:


<Proxy *>
    Order deny,allow
    Allow from all
</Proxy>

ProxyRequests Off
ProxyPreserveHost On

ProxyPass /controller http://controller.example.com:8090/controller
ProxyPassReverse /controller http://controller.example.com:8090/controller

7. Apply your configuration changes by reloading Apache modules. For example, enter:

sudo service apache2 reload

8. Start the Controller.

After the Controller starts, test the connection by opening a browser to the Controller UI as exposed by the proxy. To enable AppDynamics App Agents to connect through the proxy, be sure to set the agents' Controller host and port settings to point to the proxy, and apply any of the other points described in the general guidelines above.
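As a quick check from a client machine, you can request the Controller UI path through the proxy with curl; the proxy hostname below is a placeholder.

curl -I http://proxy.example.com/controller

A 200 response, or a redirect to the Controller login page, generally indicates that the proxy is forwarding requests to the Controller.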

Configure SSL Termination at the Reverse Proxy

This section describes how to set up security when the client-side connection to the proxy uses SSL that's terminated at the proxy. This assumes that the proxy and Controller are in a secured data center and the App Agents or UI browser client connections are from a potentially insecure network.

Terminating SSL at a proxy offloads the burden of SSL processing from the Controller to the proxy. This configuration is strongly recommended when deploying the Controller to large scale, high workload environments. Terminating SSL at a proxy also provides the benefit of having a central point in the data center for security certificate and key management.

This section provides a sample configuration for Nginx, but the concepts translate to other types of reverse proxies as well.

Configure the Proxy for SSL Termination

To perform SSL termination at the reverse proxy, you need to:

- Ensure that the App Agents can establish a secure connection with the proxy. See Agent - Controller Compatibility Matrix for SSL settings for various versions of the agent.
- Ensure that the proxy presents a server certificate signed by an authority that is trusted by the agent. Otherwise, you will need to install the proxy machine's server key.
- If using .NET App Agents in your environment, verify that the reverse proxy server uses a server certificate signed by a certificate authority (CA). The .NET App Agent does not permit SSL connections based on a self-signed server certificate.
- Configure the proxy to forward traffic between the secure port on the client side and the Controller.
- Configure a mixed-use (SSL and non-SSL) channel on the listening port on the Controller, as described in Configure the Controller for SSL Termination at the Proxy below. Apply this configuration to both the primary and secondary Controller if you have Controllers deployed as an HA pair.
- The client App Agents and browser clients under this configuration must use the secure port to communicate with the Controller (that is, the proxy). Configuring a mixed channel on the Controller as described here causes the agents to behave as if they were using a secure port, so you need to ensure that they use the secure port only.

A complete example configuration with Nginx performing SSL termination for the Controller would look something like this:

upstream appdcontroller {
    server 127.0.15.11:8191 fail_timeout=0;
}

server {
    listen 80;
    server_name appdcontroller.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443;
    server_name appdcontroller.example.com;

    ssl on;
    ssl_certificate /etc/nginx/server.crt;
    ssl_certificate_key /etc/nginx/server.key;

    ssl_session_timeout 5m;
    ssl_protocols SSLv2 SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
    ssl_prefer_server_ciphers on;

    expires 0;
    add_header Cache-Control private;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect http:// https://;
        proxy_pass http://appdcontroller;
    }
}

This example builds on the configuration shown in the simple passthrough example. In this one, any request received on the non-SSL port 80 is routed to port 443. The server for port 443 contains the settings for SSL termination. The ssl_certificate_key and ssl_certificate directives should identify the location of the server security certificate and key for the proxy.

The configuration also indicates the SSL protocols and ciphers accepted for connections. The security settings need to be compatible with the AppDynamics App Agent security capabilities, as described on the Agent - Controller Compatibility Matrix page.

To work with the Controller, you must configure the Controller with a mixed-channel HTTP listener, as described in the following section, Configure the Controller for SSL Termination at the Proxy.

Configure the Controller for SSL Termination at the Proxy

If you are performing this configuration, be sure to set up the proxy to redirect traffic to the secure channel on the client side. If you enable a mixed-use channel, as described here, you need to be sure that the clients are configured to use the secure channel.

To configure a mixed protocol channel for an SSL proxy

1. Stop the Controller application server:

./controller.sh stop-appserver

2. Open the services-config.xml file for editing. You can find it in the following directory:

/appserver/glassfish/domains/domain1/applications/controller/controller-web_war/WEB-INF/flex

3. Find the channel-definition element with an id value of my-secure-amf.
4. Replace the default value of the class attribute of the endpoint URL element, flex.messaging.endpoints.SecureAMFEndpoint, with a new value of flex.messaging.endpoints.AMFEndpoint.


5. Set the address at which clients connect to the Controller at the proxy. To do so, set these JVM options to VIP values with the modifyJvmOptions utility:


-Dappdynamics.controller.services.hostName=
-Dappdynamics.controller.services.port=

Note that similar settings exist for the internal hostname and port of the Controller (-Dappdynamics.controller.hostName and -Dappdynamics.controller.port), which are used by an app agent that is internal to the Controller. These should remain at the internal network hostname and port for the Controller.
6. Set the deeplink URL property as described in the General Guidelines above.
7. Start the application server:

./controller.sh start-appserver

Cookie Security

The Controller sets the secure flag on the X-CSRF-TOKEN cookie sent over HTTPS. The secure flag ensures that clients only transmit the cookies on secure connections.

If you are terminating HTTPS at a reverse proxy in front of the Controller, the Controller will not set cookie security by default, since connections to the Controller would occur over HTTP in this case.

To ensure cookie security, you should configure the reverse proxy to rewrite the Set-Cookie header so that the Secure flag is included. The method for adding the Secure flag varies depending on the type of the reverse proxy. The following illustrates a regular expression string replacement rule that could be used to accomplish this on HAProxy, as one example:

rspirep ^(set-cookie:.*) \1;\ Secure

Using SSL from the Reverse Proxy to the Controller

Having the proxy connect to the Controller with SSL requires a minor modification to the proxy configuration. Simply specify HTTPS as the protocol used to connect to the backend or upstream server. In other words, for the Nginx configuration, this simply requires you to modify the proxy_pass value as follows:

proxy_pass https://appdcontroller;

To complete the configuration, make sure you have configured SSL on the Controller as described in Controller SSL and Certificates.

Troubleshooting Controller Issues

On this page:

Installation Log
Controller Server Log
Performance Issues
Monitor heap usage
Timeout errors during Controller installation
No data in the Metrics Browser
Controller shutdown does not increase free memory on Linux
Controller process unexpectedly shut down
Controller server swapping too often
Could not determine the IP address of this host error during installation
Controller Cannot Connect to the MySQL Database
Stack overflow exception when installing the Controller on Windows
Triggering automatic collection of Controller logs
Collecting Troubleshooting Information for the Controller

This page provides troubleshooting information for issues that may arise during Controller installation and operation.

Installation Log

A log for the installation process is automatically created at .install4j/installation.log under the Controller home directory. This file contains information for troubleshooting installation issues.

While installation is in progress, you can find the log file in the /i4j_log_.log.

Controller Server Log

The primary log file for the Controller is at the following location:

/logs/server.log

The first step in troubleshooting Controller issues typically involves checking the log file. Search the log for errors that may correspond to the issue you are encountering. If found, an error log may help you identify and resolve the issue.
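For example, a quick way to scan for recent problems is to filter the log for typical error-level entries; the Controller home path is a placeholder, and you can adjust the pattern as needed.

grep -iE "error|severe|exception" <controller_home>/logs/server.log | tail -n 50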

Performance Issues

If you observe degradation in Controller performance, it may be due to one of the following:

The hardware resources for the Controller might not match the correct Controller profile. The Controller performance profile may be incorrectly configured.

To identify Controller performance issues

1. Confirm that the hardware matches the Controller profile you use. For details see Controller System Requirements.
2. Confirm that your disk performance matches the recommended thresholds for minimum disk performance. For details see Controller System Requirements.
3. Confirm that the Java SDK version is exactly the same as the Java version on the Controller. To display the version of Java used by the Controller:
   a. Open the command line utility.
   b. Go to the jre/bin directory under the Controller home.
   c. Run java -version.

Monitor heap usage

On Windows, use the Task Manager to measure the memory usage for the Controller. On Linux, use the top command to get statistics for the memory data.

ps -elf (expect to see a "java" process and a "mysql" process)

top (expect to see java and mysql with cpu greater than 0)

Timeout errors during Controller installation

While installing the Controller, the installer attempts to start up the Controller application server and database. At first database startup, the installer attempts to create the database schema, tables, and other artifacts needed by the Controller.

By default, the installer waits 45 minutes for the Controller app server or database to start. When installing a medium or large profile Controller or into certain types of environments such as virtual machines, the time it takes to start up the system can exceed the default startup timeout period. In this event, the installer aborts the installation and presents an error message indicating that the Controller could not be started in the allotted timeout window.

If you encounter this timeout error during installation, try increasing the timeout period. Pass a custom timeout value as a command-line argument to the installer in the following format:

-Vad-timeout-in-min=

The ad-timeout-in-min value sets the timeout for both starting and stopping the Controller processes.

If performing a silent install with the response file, add the ad-timeout-in-min parameter to the installation response file with values for the new timeout periods.
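For example, a silent installation run with a larger timeout might look like the following; the installer file name is illustrative, and the -q and -varfile options follow the silent-install pattern described on this page, so adjust them to match your installer.

./controller_64bit_linux.sh -q -varfile response.varfile -Vad-timeout-in-min=90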

No data in the Metrics Browser

This may indicate that the agents are not correctly configured. Begin troubleshooting by looking at the server.log file.

All log files for the Controller are located in the logs folder under the Controller home directory.

Error message: "Error receiving metrics (node not properly modeled yet): Could not find component for node: ..."
Solution: This error means the app agent tried to upload metric data for a specific node, but the node does not belong to any tier. Nodes must belong to tiers, and these tiers must belong to a business application, in order to receive metric data for that node. See Model the Application Environment.

Error message: "Received Metric Registration request for a machine that is NOT registered to any nodes. Sending back null!"
Solution: This error indicates that the Controller received a registration request for metrics for a Machine Agent that listed a machine ID not yet associated with any node. Configure the Machine Agent to associate with the correct application, tier, and node. See Install the Standalone Machine Agent.

Error message: "Agent upload blocked, as its reporting a time well into the future."
Solution: The App Agents attempt to report metric data using Controller time. The agents retrieve the time from the Controller every five minutes and report times using a skew of the local machine time, if different. If for some reason the App Agent reports metrics that are time-stamped ahead of the Controller time, the Controller rejects the metrics. To avoid this event, ensure that the system times for the machine on which the Controller is running and the machines for the app agents are in synchronization.

Controller shutdown does not increase free memory on Linux

You do not generally need to be concerned about the "free memory" value, as it will always trend towards zero. The Linux kernel tries to keep its cache as large as possible. As a result, the Linux kernel does not release the memory even after process termination. The memory is freed only if it is required by another process.

Controller process unexpectedly shut down

On Linux, memory allocation failures may cause the Controller process to be shut down unexpectedly by the Linux Out-of-Memory (OOM) Killer. The Controller log, server.log, does not provide information about the shut down. Instead, to diagnose this event, check the system log (usually /var/log/) for "out of memory" entries written by the OOM killer, for example, as follows:

grep -i "Out of memory" /var/log/messages

If you encounter this log entry, make sure that you have allocated sufficient swap space on the Controller machine. AppDynamics recommends allocating a minimum of 10 GB of swap space.
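To check swap allocation and look for OOM killer activity, you can use commands such as the following; the system log path varies by distribution, and on older systems use swapon -s instead of swapon --show.

free -m
swapon --show
grep -i "out of memory" /var/log/messages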

Controller server swapping too often

If you encounter unexpected swapping on the Controller machine, you can configure how aggressively the operating system swaps by configuring the swappiness parameter. The swappiness parameter controls how often the Linux kernel moves processes out of physical memory and onto the swap disk. The default value for the parameter is usually 60. When you decrease the value, you lower the tendency of the operating system to swap. This results in less default file caching.

See the documentation for your Linux distribution for recommendations on the value for the swappiness parameter. For example, RedHat recommends setting swappiness to 10 for CentOS and RedHat kernels version 2.6.32-303 or later if you encounter OOM issues even though swap space is still available.

Before you configure the swappiness parameter though, ensure that the machine has sufficient RAM and that the buffer pool size for MySQL is properly configured.

To configure swappiness

1. Check the current value for swappiness.

/sbin/sysctl -a | grep swappiness

2. Set the swappiness parameter.

For example, run the following command to set the swappiness parameter to 10.

echo 10 > /proc/sys/vm/swappiness

3. Set the swappiness parameter in the /etc/sysctl.conf file to the same value you used in step 2.

For example, add the following line to the /etc/sysctl.conf file:

vm.swappiness = 10
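To apply the /etc/sysctl.conf change without rebooting and confirm the running value, you can run:

sudo sysctl -p
cat /proc/sys/vm/swappiness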

Could not determine the IP address of this host error during installation

During the installation process, the installer attempts to ping the Controller by the host name or IP address you enter. If the ping is unsuccessful during the user input validation, the following error message appears: "Could not determine the IP address of this host. Please ensure that the IP address of the Controller host resolves to its hostname or to localhost. You may need to add an entry in the hosts file on the Controller host and retry the operation."

To make the hostname resolvable, add an entry for it to the hosts file on the machine on which you are installing the Controller. On Linux, the hosts file is typically at /etc/hosts. On Windows, look for the file at the following location, C:\Windows\System32\Drivers\etc\hosts, or the location appropriate for your version of Windows.

Add the entry in the form of the following example:

127.0.0.1 localhost myhostname

Use the IP address and hostnames appropriate for your system.

For example, the following shows the entry added as the third line of the default RedHat hosts file:

127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
198.51.100.2 myhost myhost.example.org
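To verify that the hostname now resolves, you can query the resolver directly; replace myhost with the hostname you added.

getent hosts myhost
ping -c 1 myhost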

Controller Cannot Connect to the MySQL Database

The following exception message in server.log file indicates that the Controller cannot connect to its embedded database.

Server log exception: "Caused by: java.net.ConnectException: Connection refused"

If you encounter this error, verify that the Controller database is running properly. You can do so using one of the following commands (Linux and Windows equivalents are shown):

Linux: lsof -i:3388
Windows: SysInternals Process Explorer (provides a list of files opened by the process)
Description: Lists the open files opened by the process using port 3388.

Linux: netstat -anp | grep 3388
Windows: netstat -ano | find "3388"
Description: Lists all networking ports opened by the process using port 3388.

Linux: ps -aef | grep mysql
Windows: tasklist /v | find "mysql"
Description: Lists all processes and then checks whether a process named "mysql" is active and alive.

If no processes are found, it indicates that the Controller database was incorrectly terminated. Start the Controller database again and verify the Controller server.log file for any error messages.

Stack overflow exception when installing the Controller on Windows

This exception is usually caused when the -Xss option is set to too low a value. We recommend setting this value to 96000.

Triggering automatic collection of Controller logs

Use the following console commands to trigger automatic capture of Controller log files:

On Linux, run:

controller.sh zip-logs

On Windows, open an elevated command prompt (in the Windows start menu, right-click on the Command Prompt icon and choose Run as Administrator) and run:

controller.bat zip-logs

Collecting Troubleshooting Information for the Controller

If opening a support case for Controller troubleshooting, you can facilitate diagnosis of the problem by providing the following information:

- Submit all files in the logs directory under the Controller home, in particular the server.log files. You can also use the log file utility described in Triggering automatic collection of Controller logs to collect logs.
- If the Controller runs out of memory, it generates a heap dump. Submit all files in appserver/glassfish/domains/domain1/config/hprof.
- Submit all appserver/glassfish/domains/domain1/config/gc.log files.
- Submit information about the hardware and operating system configuration of the machine that is currently hosting the Controller, including operating system, bit version, CPU cores, clock speed, disk configuration, and RAM. (A sketch for gathering this information follows this list.)
- Indicate the performance profile of the Controller. See Controller Performance Profiles.
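The following sketch gathers the requested host details and bundles them with the Controller logs; the Controller home path and output file names are placeholders.

uname -a > host-info.txt
nproc >> host-info.txt
free -m >> host-info.txt
df -h >> host-info.txt
tar czf controller-diagnostics.tgz host-info.txt <controller_home>/logs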

Uninstall the Controller

On this page:

Before Starting
Uninstall the Controller

This page describes how to completely remove the Controller software and all associated files from the machine. To do so, use the uninstaller utility located in the Controller home directory.

Before Starting

If you have installed the Events Service with the Platform Administration Application, you need to uninstall Events Service before you uninstall the Controller. See Install the Events Service on Linux for more information.

Also, if you have the EUM Server, Application Analytics, or other product modules installed, keep in mind that if you reinstall the Controller later, you will need to configure integration settings for the modules manually.

Optionally, stop the Controller before uninstalling as described in Start or Stop the Controller. If you do not stop the Controller, the uninstaller will do so for you. However, if your database or Controller generally take a long time to shut down, you can avoid the possibility of time-out errors during uninstallation by stopping the services manually.

Uninstall the Controller

1. Open a console:
   - On Linux, open a window and switch to the user who installed the Controller or to a user with equivalent directory permissions.
   - On Windows, open an elevated command prompt by right-clicking on the Command Prompt icon in the Windows Start Menu and choosing Run as Administrator.
2. From the command line, navigate to the Controller home directory.
3. Execute the uninstaller script to uninstall the Controller, as follows. On Linux:

uninstallController

On Windows:

uninstallController.exe

To uninstall in quiet mode, add the -q option. For example:


./uninstallController -q

With this option, you do not need to interact with the installer to complete the removal.

Upgrade the Controller

On this page:

About the Upgrade
Upgrade Order
Before Upgrading
Back up the Existing Controller
Upgrading the Controller
Troubleshooting the Upgrade

Related pages:

Controller Data Backup and Restore

To upgrade a Controller instance, you run the installer for the version of the Controller to which you want to upgrade on the Controller machine. The installer detects the Controller installation and upgrades that instance. If there is more than one Controller instance, the installer detects the last one installed.

About the Upgrade

- You can upgrade across multiple versions at a time; that is, you do not need to run the installer individually for each intermediate version.
- If your existing Controller is handling your current load or you are not changing your licensing, you should not need to upgrade your hardware when upgrading the Controller. To determine if you are close to exceeding your Controller's capacity, see Controller Sizing FAQ.
- If you have a license for an older version, the license should work when upgrading the Controller to a new version. However, if you have a temporary license for the old version and now have a new license, the new license will not work on the old Controller. In this case, you should upgrade the Controller to the latest version before applying the license.
- An upgrade results in Controller downtime, but it is not necessary to stop agents during the Controller upgrade.
- If you have configured settings in the domain.xml file or other configuration files manually or by using the Glassfish asadmin utility, verify those changes in the configuration files after the upgrade and re-apply any customizations that were not preserved. After the upgrade, you can find backup copies of the files in the /backup directory.

Upgrade Order

Perform the upgrade in the following order if you did not use the Platform Administration Application to install the Events Service:

1. Upgrade the Events Service.
2. Upgrade the EUM Server.
3. Upgrade the Controller.

Perform the upgrade in the following order if you used the Platform Administration Application to install the Events Service:

1. Shut down the EUM Server.
2. Upgrade the Controller.
3. Upgrade the Events Service with the application.
4. Upgrade the EUM Server.

Note that upgrading platforms that use the Platform Administration Application results in downtime of the EUM Server.

Before Upgrading

- Review the latest Release Notes and the release notes for any intermediate versions between the current version of your instance and the version you are targeting.
- Check the most recent Controller System Requirements and Controller Sizing FAQ to determine whether you need to change your performance profile.
- If you changed any Glassfish settings that are not JVM options, note your changes. You may need to configure them after upgrade. The installer recognizes and retains many common customizations to the domain.xml, db.cnf, and other configuration files but is not guaranteed to retain them all. If you made changes to the files, verify the configuration after upgrading.
- The database user password storage mechanism changed in Release 3.8. If upgrading from 3.7.x or earlier, see the Upgrade the Controller page in the 3.8 documentation for considerations related to the database user password.
- If upgrading from pre-4.2.1.7 to 4.2.1.7 or later, ensure you install the latest HA Toolkit from https://github.com/Appdynamics/HA-toolkit/releases/latest.

Back up the Existing Controller

The installer retains agent data, reports, configuration and other types of data through an upgrade, including a copy of commonly modified configuration files under /backup. Nevertheless, to mitigate the risk of data loss in the event of an unexpected failure, be sure to back up the existing installation directory before applying an upgrade to the Controller.

If the upgrade does not finish successfully, use the backup files to restore the installation to its previous state. Do not attempt to upgrade again without restoring the installation. See Troubleshooting the Upgrade for more information.

To back up the Controller instance

1. Stop the Controller application server and database. For Linux, run:

/bin/controller.sh stop

For Windows, the Controller is installed as a service. Start and stop the Controller app server, database, and reporting service from the services manager. To do so, from the Services viewer in Administrative Tools, find and stop the services identified as AppDynamics services. In particular, look for and stop, if running:
AppDynamics Controller Application Server
AppDynamics Database
AppDynamics Reporting Service
2. Back up the Controller home by copying the entire Controller home directory to a backup location (a minimal backup sketch follows these steps). Note the following points:
   - If the data home for the Controller is not under the Controller directory, be sure to back up the database directory as well.
   - If it's not possible to back up the entire data set, you can selectively back up the most important tables. Use the Metadata Backup SQL script described and attached to the Controller Data Backup and Restore page.
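A minimal backup sketch for Linux, assuming the Controller home path and the backup destination shown are placeholders:

<controller_home>/bin/controller.sh stop
cp -a <controller_home> /backup/controller-$(date +%F)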

Upgrading the Controller

The following steps describe how to upgrade the Controller on Windows and Linux.

You use the Controller installer in GUI mode, console mode, or silent mode to perform the upgrade. To use silent mode, pass the response file that the installer generated at first installation to the installer. You can find the response file in the following location

/.install4j/response.varfile

If you made any changes to the settings listed in the response file—such as to the connection port numbers, tenancy mode, or data directory—make those changes in the response file before starting the upgrade.

You must also add the existing root user password to the file as the rootUserPassword and rootUserRePassword values. If you do not, the installer prompts you for the password.
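For example, the entries added to the response file would look like the following; the value shown is a placeholder for your existing root user password.

rootUserPassword=<existing_root_password>
rootUserRePassword=<existing_root_password>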

To upgrade the Controller

1. Download the latest release from AppDynamics Download Center. If you prefer to use the Linux shell, see Downloading from the Linux Shell.

2. If they are running, stop the Controller application server and database. The Controller cannot be running during the upgrade. No data is collected between the time you shut down the old Controller and the time you start the new version. For Linux, run:

/bin/controller.sh stop

For Windows, the Controller is installed as a service. Start and stop the Controller app server, database, and reporting service from the services manager. To do so, from the Services viewer in Administrative Tools, find and stop the services identified as AppDynamics services. In particular, look for and stop, if running:
AppDynamics Controller Application Server
AppDynamics Database
AppDynamics Reporting Service
3. Launch the Controller installer. On Windows, be sure to start the Controller installer in an elevated command prompt.
4. Confirm that the installation directory is the same as the previous Controller installation directory on this machine. The Controller will automatically migrate the data only when the existing installation directory is specified. Once the installer detects the old version of the Controller in that location, it will perform an upgrade instead of doing a fresh install. The installer completes the upgrade and restarts the Controller.
5. If you made any manual changes to the configuration files listed, re-apply those changes to the equivalent files in the upgraded instance. The installer does not automatically propagate changes you have made to those files.
6. Open a browser and access the AppDynamics user interface:

http://:/controller

If the UI does not display the new Controller, refresh your browser cache to view the new UI.

To upgrade a Controller in HA mode with the HA Toolkit

If you have two AppDynamics Controllers deployed in a high availability arrangement, you must upgrade both the primary and the secondary Controller.

If administering high availability with the HA toolkit, follow these steps:

1. Download the latest release from AppDynamics Download Center. If you prefer to use the Linux shell, see Downloading from the Linux Shell.
2. On the secondary Controller, stop the database process (the application server process should already be stopped):

/sbin/appdservice appdcontroller-db stop

3. Stop the primary Controller application server and database. The Controller cannot be running during the upgrade. No data is collected between the time you shut down the Controller and the time you start the new version.
a. For Linux, run:

<controller_home>/bin/controller.sh stop

For Windows, the Controller is installed as a service. Start and stop the Controller app server, database, and reporting service from the Services manager. To do so, from the Services viewer in Administrative Tools, find and stop the services identified as AppDynamics services. In particular, look for and stop the following services if they are running:

AppDynamics Controller Application Server
AppDynamics Database
AppDynamics Reporting Service

4. Launch the Controller installer. On Windows, be sure to start the Controller installer in an elevated command prompt.
5. Confirm that the installation directory is the same as the previous Controller installation directory on this machine. The Controller automatically migrates the data only when the existing installation directory is specified. Once the installer detects the old version of the Controller in that location, it performs an upgrade instead of a fresh install. The installer completes the upgrade and restarts the Controller.
6. If you made any manual changes to the configuration files listed, re-apply those changes to the equivalent files in the upgraded instance. The installer does not automatically propagate changes you have made to those files.
7. Open a browser and access the AppDynamics user interface:

http://<controller_host>:<controller_port>/controller

If the UI does not display the new Controller, refresh your browser cache to view the new UI.
8. Use the replicate.sh script to propagate changes from the primary to the secondary Controller. The steps are the same as those for setting up replication for an existing Controller, described in Using the High Availability (HA) Toolkit. Since you replicate from an existing Controller, an incremental (imperfect) replication is recommended to minimize downtime.

To upgrade a Controller in HA mode without the HA Toolkit

If you are not using the HA toolkit, follow the general steps below. It is recommended that you perform these steps in consultation with AppDynamics support:

1. Download the latest release from AppDynamics Download Center. If you prefer to use the Linux shell, see Downloading from the Linux Shell.
2. Stop the primary Controller application server and database. The Controller cannot be running during the upgrade. No data is collected between the time you shut down the Controller and the time you start the new version. For Linux, run:

<controller_home>/bin/controller.sh stop

For Windows, the Controller is installed as a service. Start and stop the Controller app server, database, and reporting service from the Services manager. To do so, from the Services viewer in Administrative Tools, find and stop the services identified as AppDynamics services. In particular, look for and stop the following services if they are running:

AppDynamics Controller Application Server
AppDynamics Database
AppDynamics Reporting Service

3. Launch the Controller installer. On Windows, be sure to start the Controller installer in an elevated command prompt.
4. Confirm that the installation directory is the same as the previous Controller installation directory on this machine. The Controller automatically migrates the data only when the existing installation directory is specified. Once the installer detects the old version of the Controller in that location, it performs an upgrade instead of a fresh install. The installer completes the upgrade and restarts the Controller.
5. If you made any manual changes to the configuration files listed, re-apply those changes to the equivalent files in the upgraded instance. The installer does not automatically propagate changes you have made to those files.
6. Open a browser and access the AppDynamics user interface:

http://<controller_host>:<controller_port>/controller

If the UI does not display the new Controller, refresh your browser cache to view the new UI.
7. Verify on the secondary Controller that all the data has been replicated correctly.
a. Open a command line utility and go to the <controller_home>/bin directory.
b. Log in to the secondary Controller database. For Linux, run:

controller.sh login-db

For Windows, run the following command in an elevated command prompt (right-click on the Command Prompt icon in the Windows Start menu and choose Run as administrator):

controller.bat login-db


c. Execute the following command:

SHOW SLAVE STATUS\G

This command should return a result similar to the following:

Seconds_Behind_Master: $Number_Of_Seconds_Behind_Master

If you get a non-zero number for this value, wait until the number becomes zero.
8. Perform a failover to the secondary Controller, so that the primary app server is shut down and the secondary app server is started. The commands for starting and stopping the app server are in Start or Stop the Controller.
9. Perform steps 1 through 7 for the secondary Controller.
10. Shut down the secondary Controller app server and start the primary app server.

Both the primary and secondary Controllers are now upgraded.

Troubleshooting the Upgrade

If the upgrade does not succeed, the installer does not roll back changes on disk. Diagnose and troubleshoot the issue before reattempting the installation or upgrade.

To troubleshoot the issue, check the installation log at <controller_home>/.install4j/installation.log. You can also check temporary installer logs in the system tmp directory (on Linux: /tmp/i4j*; on Windows: %temp%\i4j*). The log files contain entries written up to the point where the installation stopped. Controller log files may contain additional information.

After diagnosing the issue, manually revert the installation to its state prior to starting the upgrade. Replace the existing Controller and data home directories with the backup directories you created before starting the upgrade. Then apply your remediation changes and restart the upgrade or installation.

A common upgrade issue involves the upgrade timing out. The installer attempts to restart the Controller and database after applying an update or installation. For large databases, and depending on the available system resources, this can take a considerable amount of time. If the Controller installer cannot finish starting up the Controller within a set timeout period (30 minutes by default), it exits the installation or upgrade.

You can increase the default timeout period for system startup in the installer. Set the ad-timeout-in-min property to the new time in minutes in the response.varfile under the Controller_home/.install4j directory, or pass it from the command line as an argument to the installer.
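For example, to raise the timeout to 60 minutes before re-running the installer, you could append the property to the response file. This is a minimal sketch; <controller_home> is a placeholder for your Controller home directory.

echo 'ad-timeout-in-min=60' >> <controller_home>/.install4j/response.varfile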

Install the Events Service

On this page:

About the Events Service
Deployment Topology Overview
Single Node Deployment
Multi-Node Cluster
Hybrid Deployment
Default Ports

This topic describes the AppDynamics Events Service, the data store for on-premises Application Analytics, End User Monitoring (EUM), and Database Monitoring deployments.

About the Events Service

The MySQL database embedded in the Controller stores application metric and configuration data generated by the Controller. While the MySQL database provides effective storage for this type of data, the high-volume, performance-intensive nature of analytics-based data requires dedicated, horizontally scalable storage, which is provided by the Events Service.

If you are installing the server components for End User Monitoring, Application Analytics, or Database Monitoring, you need to use the Events Service.

Database Monitoring uses the instance of the Events Service embedded in the Controller by default. For data redundancy and storage scalability, however, you can configure Events Service storage for Database Monitoring.

If you are using multiple AppDynamics modules that depend on the Events Service, you should configure the modules to use the same Events Service instance or cluster.

If you already have a deployment made up of an on-premises Controller that uses the SaaS EUM Cloud, but now want to take advantage of User Analytics features, you need to install the on-premises EUM Server and configure it to connect to the on-premises Events Service. For more information, see Configure the EUM Server.

Deployment Topology Overview

You can deploy the Events Service to a single node or cluster. A cluster deployment is made up of at least three nodes. Clusters are horizontally scalable, so that you can add nodes to the cluster as your data requirements grow. A cluster also enables you to utilize data replication, which helps to ensure data integrity in the event of a node failure.

The Controller includes an embedded Events Service instance used by the Database Monitoring product. The embedded Events Service is not meant to be used with production Application Analytics or EUM installations, because it runs on the same machine as the Controller and does not offer data replication or scalability. It may be used for small-scale environments, especially those intended for demonstration or learning purposes. Note, however, that you cannot migrate data from the embedded Events Service to a standalone Events Service if you upgrade later.

Single Node Deployment

In a single-node Events Service deployment, the Controller and other Events Service clients can connect directly to the Events Service node or through a load balancer. Deploying a single node Events Service behind a load balancer allows you to grow the deployment to a multi-node cluster easily, without having to modify the clients.

Multi-Node Cluster

A multi-node cluster is made up of three or more nodes. With a cluster, the Controller and other Events Service clients (the EUM Server and Analytics Agent) connect to the Events Service through a load balancer, which distributes load across the Events Service cluster members. In a single node deployment, clients connect through a load balancer or directly to the Events Service.

The nodes in a cluster exchange a large amount of data. For this reason, when deploying a cluster, make sure to install all cluster nodes within the same local network, ideally attached to the same network switch.

Hybrid Deployment

In most deployments, the backend platform components of the AppDynamics Application Intelligence Platform—the Controller, Events Service and EUM Server—would all reside either entirely on-premises or entirely in the cloud (as AppDynamics-hosted SaaS components).

A hybrid deployment, however, uses a mix of the hosting models. It consists of an on-premises Controller and Events Service (for Transaction Analytics and Log Analytics data), along with a SaaS-hosted EUM Cloud and Events Service for storing and serving EUM data.

While such a hybrid deployment is possible, certain limitations exist. The following features are not available in a hybrid deployment:

Analytics metrics for EUM data sets. (See Creating Metrics From Analytics Searches for more about Analytics metrics.)
Analytics API key creation and query API access for EUM data sets.

Default Ports

The default ports used by the Events Service are:

Events Service API Store Port: 9080
Events Service API Store Admin Port: 9081

The Events Service cluster members use additional ports for internal communication among the cluster members. All the ports used within the cluster are listed in the Events Service configuration file, conf/events-service-api-store.properties.
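As a quick sanity check, you can list the port-related settings from that configuration file on an Events Service node. This is a minimal sketch, assuming you run it from the Events Service installation directory:

grep -i port conf/events-service-api-store.properties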

Events Service Sizing and Capacity Planning

On this page:

About Hardware Capacity and Resource Planning
General Recommendations
Events Service Node Sizing Based on License Units
Sizing for EUM and Database Monitoring Analytics

This page contains information to help you estimate the hardware resource requirements for your Events Service deployment.

About Hardware Capacity and Resource Planning

When estimating your hardware requirements, the first step is to determine the event ingestion rate (for transaction analytics) or the amount of data being indexed (for log analytics). This helps you to determine the number of analytics license units you will need.

Once you determine your license unit requirements, it is important to consider other factors that affect hardware capacity, such as the processing load of queries run against the Events Service and the actual type of hardware used. A physical server is likely to perform better than a virtual machine. You should also take into account seasonal or daily spikes in activity in your monitored environment.

An event is the basic unit of data in the Events Service. In terms of application performance management, a transaction analytics event corresponds to a call received at a tier. A business transaction instance that crosses three tiers, therefore, would result in three events being generated. In application performance management metrics, the number of business transaction instances is reflected by the number of calls metric for the overall application. In End User Monitoring, each page view equates to an event, as does each Ajax request, network request, or crash report.
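For example, a hypothetical application that processes 100 business transaction instances per second, with each instance crossing three tiers, generates 300 transaction analytics events per second, or roughly 26 million events per day. Estimates of this kind feed directly into the license unit and node sizing guidance that follows.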

General Recommendations

Solid state drives (SSD) can significantly outperform hard disk drives (HDD), and are therefore recommended for production deployments. Ideally, the disk size should be 1 TB or above.
All machines in your cluster should have identical hardware specifications.
For heap space allocation, AppDynamics recommends allocating half of the available RAM to the Events Service process, with a minimum of 7 GB up to 31 GB.
When testing the events ingestion rate in your environment, it is important to understand that events are batched. Ingestion rates observed at the scale of a minute or two may not reflect the overall ingestion rate. For best results, observe the ingestion rate over an extended period of time, several days at least.
Carefully monitor the health of your cluster and grow the cluster as needed.

Events Service Node Sizing Based on License Units

The data in this section can help you plan your hardware requirements. It describes recommended hardware configurations (in the context of Amazon EC2 instance types) corresponding to the number of license entitlement units for log and transaction analytics. See License Entitlements and Restrictions for details about license units for log and transaction analytics.

The hardware shown for each license amount represents the capacity needed for a theoretical combined load of both transaction analytics and log analytics events. The numbers were extrapolated from actual tests performed with an uncombined load. Note that the test conditions did not include query load, and so may not be representative of a true production analytics environment.

The following table shows sizing recommendations and describes the size of the cluster used for testing. This does not mean you are limited to a seven-node Events Service. If you need to go beyond seven nodes, contact your AppDynamics account representative to ensure proper sizing for your specific environment.

AWS Machine Instance Type: i2.2xlarge (61 GB RAM, 8 vCPU, 1600 GB SSD)
  Transaction analytics license units: 1 node: 20, 3 nodes: 37, 5 nodes: 44, 7 nodes: 63
  Log analytics license units: 1 node: 7, 3 nodes: 10, 5 nodes: 17, 7 nodes: 19

AWS Machine Instance Type: i2.4xlarge (122 GB RAM, 16 vCPU, 3200 GB SSD)
  Transaction analytics license units: 1 node: 22, 3 nodes: 41, 5 nodes: 84, 7 nodes: 113
  Log analytics license units: 1 node: 16, 3 nodes: 19, 5 nodes: 32, 7 nodes: 44

AWS Machine Instance Type: i2.8xlarge (244 GB RAM, 32 vCPU, 6400 GB SSD)
  Transaction analytics license units: 1 node: 53, 3 nodes: 94, 5 nodes: 120
  Log analytics license units: 1 node: 39, 3 nodes: 116, 5 nodes: 270

The following points describe the test conditions under which the license units-to-hardware profile mappings in the table were generated:

Average log event size: 350 bytes
Average size of a business transaction event: 1 KB
Tiers in the business transaction: 3

The tests were conducted on virtual hardware with a programmatically generated workload. Real-world workloads may vary. To best estimate your hardware sizing requirements, carefully consider the traffic patterns in your application and test the Events Service in a test environment that closely resembles your production application and user activity.

Sizing for EUM and Database Monitoring Analytics

End User Monitoring and Database Monitoring include analytics-related features that can also send data to the Events Service. In End User Monitoring, each page view equates to an event, as does each Ajax request, network request, or crash report. There can be a few dozen Ajax requests for every page load.

In general, the ingestion capacity and sizing profile for EUM or Database Monitoring analytics events is equivalent to that for Log Analytics, with the raw event size being about 2 kilobytes on average.

Install the Events Service on Linux

On this page:

Getting Started with the Platform Administration Application
Events Service Requirements
Port Settings
Configure SSH Passwordless Login
Tune the Operating System for Production Cluster Nodes
Install the Events Service Cluster
Administering the Cluster
Expanding the Cluster
Removing Events Service
Monitoring Cluster Node Health
Collecting Events Service Logs
Changing the SSH Key File
Upgrading the Events Service

The AppDynamics Platform Administration Application automates the task of installing and administering an Events Service deployment on Linux platforms.

You can use the Platform Administration Application to install the Events Service as a 1-node deployment or as an Events Service cluster of 3 or more nodes.

This page describes how to install the Events Service for Linux with the Platform Administration Application.

Getting Started with the Platform Administration Application

The Platform Administration Application is automatically installed with the Controller on Linux systems. It is located in the platform_admin directory in your Controller home directory.

The primary entry point for the application is the platform-admin.sh script in the platform_admin/bin directory.

To see the operations available for the Platform Administration application, from a command terminal navigate to the Controller home directory and run the script with the -h switch, as follows:

bin/platform-admin.sh -h

You can view the format of each operation by specifying the operation with the -h argument, for example:

bin/platform-admin.sh install-events-service -h

The Platform Administration Application runs as a separate process from the Controller. Certain commands pertain to working with the Platform Administration Application itself rather than to Events Service nodes, including commands to start and stop the Platform Administration Application process. These include:

start-platform-admin
stop-platform-admin
show-platform-admin-version

The Platform Administration Application prevents multiple users from running commands at the same time. If a second user attempts to run a command while another command is in progress, the second command is not completed and an error message appears indicating that another command is in progress. To avoid such conflicts, the application should generally be used by a single user at a time.

Events Service Requirements

Install the Events Service on appropriately sized hardware. The Platform Administration Application checks the target system for minimum hardware requirements. For more information on these requirements, see the description of the profile argument to the Events Service install command in Install the Events Service Cluster. For a production deployment or high volume testing environment, see detailed sizing information in Events Service Sizing and Capacity Planning.
The Events Service must run on a dedicated machine. The machine should not run other applications or processes not related to the Events Service.
The Controller and Events Service must reside on the same local network and communicate by internal network. Do not deploy the cluster to nodes on different networks, whether relative to each other or to the Controller where the Platform Administration Application runs. When identifying cluster hosts in the configuration, you will need to use the internal DNS name or IP address of the host, not the externally routable DNS name. For example, in an AWS deployment, use the private IP address such as 172.31.2.19 rather than a public DNS hostname such as ec2-34-201-129-89.us-west-2.compute.amazonaws.com.
Linux is required for the cluster and Controller machines. The versions of Linux supported include the flavors and versions supported by the Controller, as indicated by Controller System Requirements.
Cluster machines must have identical directory structures, the same user account, and equivalent hardware profiles.
SSH must be installed on all machines, and the cluster machines must permit key-based SSH access (instructions below).
Make sure that the appropriate ports on each Events Service host are open. See Port Settings for more information.
The Events Service should normally operate behind a load balancer. The Platform Administration Application automatically configures a direct connection from the Controller to the Events Service node. If you deploy a cluster, the first master node is automatically configured as the connection point in the Controller. You will need to reconfigure the Controller to connect through the load balancer VIP after installation, as described below. For general information, see Install the Events Service. For sample configurations, see Load Balance Events Service Traffic.
Before starting, be sure to review the Release Notes for known issues and late-breaking information on using the AppDynamics Platform Administration Application.

Port Settings

On each machine, the following ports need to be accessible to external (outside the cluster) traffic:

Events Service API Store Port: 9080
Events Service API Store Admin Port: 9081

For a cluster, ensure that the following ports are open for communication between machines within the cluster. Typically, this requires configuring iptables or OS-level firewall software on each machine to open the ports listed below:

9300 – 9400

The following shows an example of iptables commands to configure the operating system firewall:

-A INPUT -m state --state NEW -m tcp -p tcp --dport 9080 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9081 -j ACCEPT
-A INPUT -m state --state NEW -m multiport -p tcp --dports 9300:9400 -j ACCEPT

If a port on the Events Service node is blocked, the Events Service installation command will fail for the node and the Platform Administration Application command output and logs will include an error message similar to the following:

failed on host: with message: Uri [http://localhost:9080/_ping] is un-pingable.

If you see this error, make sure that the ports indicated in this section are available to other cluster nodes.
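To verify that a node's API store port is reachable before or after installation, you can issue a request to the same _ping URI that the installer checks. This is a minimal sketch; host1 is an example hostname.

curl -v http://host1:9080/_ping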

Configure SSH Passwordless Login

The Platform Administration Application needs to be able to access each cluster machine using passwordless SSH. Before starting, enable key-based SSH access as described here.

This setup involves generating a key pair on the Controller host and adding the Controller's public key as an authorized key on the cluster nodes. The following steps take you through the configuration procedure for an example scenario. You will need to adjust the steps based on your environment.

If you are using EC2 instances on AWS, the following steps are taken care of for you when you provision the EC2 hosts. At that time, you are prompted for your PEM file, which causes the public key for the PEM file to be copied to the authorized_keys of the hosts. You can skip these steps in this case.

On the Controller machine, follow these steps:

1. Log in to the Controller machine or switch to the user you will use to perform the deployment:

su - $USER

2. Create a directory for SSH artifacts (if it doesn't already exist) and set permissions on the directory, as follows:

mkdir -p ~/.ssh
chmod 700 ~/.ssh

3. Change to the directory:

cd .ssh

4. Generate PEM public and private keys in RSA format:

ssh-keygen -t rsa -b 2048 -v

The key file must not use password protection.


5. Enter a name for the file in which to save the key when prompted, such as appd-analytics.

6. Rename the key file by adding the .pem extension:

mv appd-analytics appd-analytics.pem

You will later configure the path to it as the sshKeyFile setting in the Platform Administration Application configuration file, as described in Deploying an Events Service Cluster.
7. Transfer a copy of the public key to the cluster machines. For example, you can use scp to perform the transfer as follows:

scp ~/.ssh/appd-analytics.pub host1:/tmp
scp ~/.ssh/appd-analytics.pub host2:/tmp
scp ~/.ssh/appd-analytics.pub host3:/tmp

The first time you connect, you may need to confirm the connection to add the cluster machine to the list of known hosts and enter the user's password.
8. On each cluster node (host1, host2, and host3), create the .ssh directory in the user home directory, if not already there, and add the public key you just copied as an authorized key:

cat /tmp/appd-analytics.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

9. Test the configuration from the Controller machine by trying to log in to a cluster node by ssh:

ssh host1

If unable to connect, make sure that the cluster machines have the openssh-server package installed and that you have modified the operating system firewall rules to accept SSH connections. If successful, you can use the Platform Administration Application on the Controller host to deploy the Events Service cluster, as described next.
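As an additional check, you can test the connection with the specific key file you generated. This is a minimal sketch; appduser and host1 are example values.

ssh -i ~/.ssh/appd-analytics.pem appduser@host1 'echo connection OK'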

If the Platform Administration Application attempts to install the Events Service on a node for which passwordless SSH is not properly configured, you will see the following error message:

./bin/platform-admin.sh install-events-service --ssh-key-file /root/e2e-demo.pem --remote-user username --installation-dir /home/username/ --hosts 172.31.57.202 172.31.57.203 172.31.57.204
...
Events Service installation failed.
Task: Copying JRE to the remote host failed on host: 172.31.57.204 with message: Failed to upload file: java.net.ConnectException: Connection timed out

If you encounter this error, use the instructions in this section to double check your passwordless SSH configuration.

Tune the Operating System for Production Cluster Nodes

After installing the Events Service cluster, as described in the following section, you will run a script on each node that tunes the operating system settings for best performance. There are a few additional manual changes you need to make beforehand, as described here. These are particularly relevant for production Events Service deployments.

On each node in the cluster, make these configuration changes:

1. Using a text editor, open /etc/sysctl.conf and add the following:

vm.max_map_count=262144

2. Raise the open file descriptor limit in /etc/security/limits.conf, as follows:

username_running_eventsservice  soft  nofile  96000
username_running_eventsservice  hard  nofile  96000

Replace username_running_eventsservice with the username under which the Events Service processes run. So if you are running Analytics as the user appduser, you would use that name as the first entry.
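After editing /etc/sysctl.conf, you can apply and verify the kernel setting without rebooting. This is a minimal sketch:

sudo sysctl -p
sysctl vm.max_map_count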

Install the Events Service Cluster

Before starting, make sure you have enabled passwordless SSH access on the node machines and tuned the environment, as described above.

To deploy the Events Service:

1. If it is not already started, start the Controller.
2. From the command line on the Controller machine, go to the <controller_home>/platform_admin directory. Notice the events-service.zip file in the Platform Administration Application home directory. This contains the Events Service software that will be deployed to the cluster nodes.
3. Start the Platform Administration Application by running this command:

bin/platform-admin.sh start-platform-admin

4. Optionally, create a hosts file with the addresses of the node hosts. The file should contain the internal DNS hostnames or IP addresses of the nodes to add. Each IP address or hostname should be on its own line in the file. For example:

host1
host2
host3

Instead of creating a hosts file, you can pass the host URLs as command-line arguments.
5. Create the installation directory for your Events Service on each node in the cluster. The installation-dir argument is required in the installation command, and you will see an error if the directory does not exist when you run the command.
6. Run the installation command, passing cluster configuration settings as arguments to the command. The format for the command is the following:

bin/platform-admin.sh install-events-service --profile dev --hosts host1 host2 host3 --remote-user <username> --ssh-key-file "<path_to_ssh_key_file>" --data-dir "<path_to_data_directory>" --installation-dir "<path_to_installation_directory>"


Arguments are:
installation-dir: Mandatory. The directory in which the Platform Administration Application will install the Events Service software on the cluster hosts.
data-dir: The directory where the Events Service will store its data. If not specified, the default is a directory named data under the installation-dir location.
remote-user: Mandatory. The user name of the user account on the cluster hosts that the Platform Administration Application will use to install the Events Service. As shown above, do not use quotes around the username value.
hosts: Use this argument or host-file to specify the internal DNS hostnames or IP addresses of the cluster hosts in your deployment. With this argument, pass the hostnames or addresses as command line parameters. For example:

--hosts 192.168.32.105 192.168.32.106 192.168.32.107

host-file: As an alternative to specifying hosts with the hosts argument, pass them as a list in a text file that you specify with this argument. Specify the internal DNS hostname or IP address for each cluster host as a separate line in the plain text file.
profile: By default (with profile not specified), the installation is considered a production installation. Specifying the developer profile (--profile dev) directs the Platform Administration Application to use a reduced hardware profile requirement for the target machine, suitable for non-production environments only. For a dev profile, the Platform Administration Application checks for a 1 core CPU, 1 GB RAM, and 2 GB of disk space. Otherwise, the Platform Administration Application requires a 4 core CPU, 12 GB RAM, and 128 GB of disk space.
ssh-key-file: The path to the SSH key file used for passwordless access to the cluster hosts.

For example:

bin/platform-admin.sh install-events-service --profile dev --hosts ip-172-31-20-21.us-west-2.compute.internal ip-172-31-20-22.us-west-2.compute.internal ip-172-31-20-23.us-west-2.compute.internal --remote-user root --ssh-key-file "~/.ssh/appd-analytics.pem" --installation-dir "<path_to_installation_directory>"

If using a hosts text file, use the following:

bin/platform-admin.sh install-events-service --remote-user root --ssh-key-file "~/.ssh/appd-analytics.pem" --installation-dir "<path_to_installation_directory>" --host-file=/home/appduser/hosts.txt

7. Log in to each node machine, and run the script for setting up the environment as follows:
a. Add execute permission to the tune-system.sh script:

chmod +x tune-system.sh

b. Run the script:


sudo <installation_dir>/events-service/latest/bin/tool/tune-system.sh

8. By default, the Platform Administration Application configures the Events Service connection in the Controller to refer to the first master node defined in the cluster. If you are using a load balancer, as recommended, you need to change this Controller setting to point to the Events Service VIP as presented at the load balancer instead, as follows:
a. Open the Administration Console.
b. In the Controller settings pane, find appdynamics.analytics.server.store.url.
c. Change the value of the setting to the URL of the VIP for the Events Service at the load balancer.
d. By default, Database Monitoring stores events data in the Events Service embedded in the Controller. To have it use the Events Service you just deployed:
i. Change appdynamics.analytics.local.store.url to the VIP URL as well.
ii. Make sure that appdynamics.analytics.local.store.controller.key has the same value as appdynamics.analytics.server.store.controller.key.
Note that only newly generated Database Monitoring data will be stored in the new Events Service; previously collected data remains in the embedded Events Service instance unless it is migrated to the new Events Service.

It may take a few minutes for the Controller and Events Service to synchronize account information after you modify connection settings in the console.
9. Set up load balancing. See Load Balance Events Service Traffic for information about configuring the load balancer.

Administering the Cluster

You can perform administration tasks for the Events Service nodes through the Platform Administration Application. You should not need to access the cluster node machines directly once they are deployed. In particular, you should not attempt to use the script files in the events-service/bin directories.

The Platform Administration Application must be running to install or administer Events Service nodes. Commands for stopping, starting, and checking the version of the Platform Administration Application are:

start-platform-admin
stop-platform-admin
show-platform-admin-version

With the Platform Administration Application running, use these commands to administer an existing Events Service deployment:

restart-events-service – Restarts the Events Service on all nodes in the cluster. This restarts Events Service processes only, not the host machine itself.
restart-events-service-node --node <node> – Restarts Events Service processes on one or more identified nodes in the cluster.
list-events-service-nodes – Lists the nodes in the current cluster.
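For example, to restart the Events Service processes on a single node (the IP address is an example value):

bin/platform-admin.sh restart-events-service-node --node 172.31.2.19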

Expanding the Cluster

The Events Service cluster is horizontally scalable. To grow your existing deployment to contend with increased workload, simply add nodes.

This section describes how to add nodes to a deployed cluster. Before starting, prepare the new cluster machine. Verify system requirements and prepare the environment, as described in Setting Up the Environment.

It is important for any new machine in the cluster to have the same SSH-enabled user account as existing cluster members.

Once you have prepared the system, run the command for adding nodes:

bin/platform-admin.sh add-events-service-nodes --hosts host1 host2 host3

Alternatively, pass the hostnames of the new nodes in a file:

bin/platform-admin.sh add-events-service-nodes --host-file=/home/appduser/hosts.txt

The file you pass to the command (hosts.txt in the example) should contain the internal DNS hostnames or IP addresses of the nodes to add. It does not need to list existing nodes in the cluster.

Be sure to modify your load balancer rules to include the new cluster member in its routing rules. See Load Balance Events Service Traffic for more information.

Removing Events Service

You can remove Events Service from individual nodes or all at once.

The uninstall-events-service command removes the Events Service software and data from all cluster nodes. The Controller and Events Service share a database, so if you are uninstalling the Controller instance under which you ran the Platform Administration Application to install the Events Service, you need to uninstall the Events Service with this command before you uninstall the Controller.

The remove-events-service-node command removes the Events Service software and data from a single node that you specify by hostname. You should only use this command if you have at least four nodes in your cluster. Removing an Events Service node from a three-node cluster is not supported. Identify the node to remove using the --node command line parameter.

After removing a node, be sure to adjust your load balancer rules to remove the old cluster member. See Load Balance Events Service Traffic for more information.

If you are not using a load balancer with a cluster deployment, keep in mind that the connection settings for the first master node that reports to the Controller at installation time are written to the Controller setting that identifies the Events Service to the Controller. If you remove a master node in that case, check whether the removed master node is the node identified as the Events Service destination URL in the Controller connection settings (for example, appdynamics.analytics.server.store.url) and adjust the setting if so. See Connect to the Events Service for more information.

Note the following guidelines:

You cannot remove nodes such that the resulting cluster size is two.
A cluster that consists of three or more nodes cannot be reduced to a single-node Events Service.

This command removes the node specified in the argument:

bin/platform-admin.sh remove-events-service-node --node 10.0.100.51

Respond when prompted whether you want to remove the data along with the Events Service software.

If you attempt to remove a master node using the command shown above, the Platform Administration Application notifies you that you are attempting to remove a master node and cancels the operation. As indicated in the output, you can proceed to remove the master node by rerunning the command with the -f force flag. When you remove a master node, the cluster elects a new master node from the existing data nodes. The election process may take a few seconds, during which new events cannot be processed. Be sure to perform this operation at a time when the impact of a brief interruption of service will be minimal.
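For example, a forced removal of a master node might look like the following (a sketch based on the -f force flag described above; the IP address is an example value):

bin/platform-admin.sh remove-events-service-node -f --node 10.0.100.51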

After uninstalling Events Service, the only trace of the Events Service remaining on the host may be a file named orcha-modules.log. It appears in the logs directory at the former installation root directory. To remove all traces of the Events Service, manually remove the log file after removing the Events Service with the Platform Administration Application.

Monitoring Cluster Node Health

It is important to carefully monitor the health of the Events Service cluster, especially for a new deployment, and in particular to monitor disk consumption.

You can check the status of the cluster from the Controller machine using this command:

bin/platform-admin.sh show-events-service-health

The output shows possible issues and the steps you need to take to resolve them. For example, if available disk is low, the resolution is to add nodes to the cluster. The following are the potential errors and remediation steps:

Error: Cluster out of capacity
Explanation: The heap size of any Events Service Java process exceeds 80% utilization
Remediation: Add Events Service nodes

Error: Disk size remaining drops below 30%
Explanation: The disk size of the identified node dropped below 30%
Remediation: Add Events Service nodes

Error: Events Service is not reachable but the host is reachable
Explanation: The Events Service process on the identified node is not functioning properly
Remediation: Restart the node

Error: Machine is not reachable
Explanation: The machine may be down, disconnected, or suffering failure
Remediation: Try restarting the machine; if the problem continues, the node may need to be removed from the cluster

Error: Cluster needs restart
Explanation: A condition has been identified that requires a cluster restart
Remediation: Restart the cluster

Error: Cluster size is 2
Explanation: An Events Service cluster requires more than two nodes
Remediation: Add a node

Collecting Events Service Logs

The Platform Administration Application can collect logs from the nodes in the cluster. The following command retrieves node logs and bundles them, along with the Platform Administration Application's own logs, into a ZIP file in the Platform Administration Application home directory:

bin/platform-admin.sh retrieve-events-service-logs

When the command is finished, a ZIP file named events-service.log.zip is created in the location from which you ran the script. You can then extract the archive to troubleshoot, or submit the archive for troubleshooting assistance to your AppDynamics representative. If the Platform Administration Application failed to connect to one of the cluster nodes to retrieve logs for any reason, the connection error is written to a log file included in the archive.

Changing the SSH Key File

After the initial installation, you may need to update the PEM file that gives the Platform Administration Application access to the node machines.

You can do so by creating the PEM file, as described in Configure SSH Passwordless Login, and using the following command to install the new PEM file.

bin/platform-admin.sh set-user-credentials --ssh-key-file newkeyfile.pem

The change takes effect immediately.

Upgrading the Events Service

You can use the Platform Administration Application to perform a rolling upgrade of the Events Service software on deployed nodes.

The general steps for upgrading an Events Service deployment are as follows:

1. Upgrade the Controller. (See Upgrade the Controller.)
2. Apply the upgrade to the Events Service nodes using the following command:

bin/platform-admin.sh upgrade-events-service

The Platform Administration Application checks whether the Events Service is up to date relative to the current Controller version and, if not, performs the update.

Install the Events Service on Windows

On this page:

Events Service Cluster Architecture
About Master Nodes
Requirements
Deploying an Events Service Node (1 Node)
Deploying an Events Service Cluster (3 Nodes)
Expanding a Cluster (4+ Nodes)
Monitoring Cluster Node Health
Removing Nodes from a Cluster
Starting and Stopping the Events Service
Uninstalling Events Service as a Windows Service

This page describes how to install and administer the Events Service as a single node or cluster on Microsoft Windows systems. The instructions on this page are organized by deployment scenario, as follows:

Installing a single-node Events Service. A single Events Service instance may be useful for demonstration purposes or other scenarios where data redundancy and high availability are not required.
Installing a three-node cluster. A three-node deployment is the minimum size for an Events Service cluster.
Adding nodes to a three-node cluster. You can scale up from a three-node cluster by adding additional nodes.

In addition, it includes information on administering the Events Service, such as starting and stopping it.

Events Service Cluster Architecture

The following diagram depicts the general features and connection points in a cluster deployment. Notice that a load balancer is required for a cluster.

About Master Nodes

As described in the installation steps in the sections below, the exact configuration values you'll use depend on whether you are deploying a single node installation, a three-node cluster, or the fourth or higher node of a multi-node cluster.

Primarily the differences have to do with the role of the master node in the cluster. A master node in the Events Service deployment is a node which—in addition to acting as a storage node—manages the state of the data across the cluster. In a single node deployment, there's not much for the master to do. But in a multi-node cluster, the master manages the state across the entire cluster, including the state of the replica. The first node that starts up acts as the master. If the master node becomes unavailable for any reason, the remaining nodes need to elect a new master.

The configuration for each node needs to specify:

Whether this node is permitted to serve as a master, and
The minimum number of nodes that need to be available in the cluster for a new master to be elected (minimum master nodes).

The differences by node order are:

Cluster node number: Only node in a one-node installation
  Master-enabled? true
  Minimum master nodes: 1

Cluster node number: First, second, and third nodes of a three-node cluster
  Master-enabled? true
  Minimum master nodes: 2

Cluster node number: Fourth (or higher) node of a cluster
  Master-enabled? false
  Minimum master nodes: 2
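For instance, the relevant settings for the only node of a one-node deployment could look like the following in the node's properties file (a minimal sketch using the property names described on this page):

ad.es.node.master=true
ad.es.node.minimum_master_nodes=1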

Requirements

The Events Service process must run on a dedicated machine.
The Controller and Events Service must reside on the same local network and communicate by internal network. Do not deploy the cluster to nodes on different networks, whether relative to each other or to the Controller.
Java 1.7 must be installed, with the JAVA_HOME variable pointing to the location of the Java 1.7 JRE.

Deploying an Events Service Node (1 Node)

To install the Events Service on a single node:

1. Download the Events Service distribution from the AppDynamics download center to the machine on which you are installing the Events Service.
2. Unzip the Events Service distribution archive to a directory on the target host. This creates the events-service directory with the Events Service artifacts. You will need to configure a few settings in the Events Service download. But first, configure the Controller to connect to the Events Service and get the connection key, as described next.
3. Configure the connection to the Events Service in the Controller, and get the Controller key for the Events Service configuration, as follows:
a. With the Controller running, open the Administration Console as the root user.
b. In the Controller Settings page, search for appdynamics.analytics.server.store.url.
c. Replace the default value with the internal hostname of the Events Service machine and the default Events Service listen port, 9080. If you are putting a load balancer in front of the Events Service (required for a cluster), this will be the VIP for the Events Service as exposed at the load balancer. In this case, it is likely you will need to return to this step after you finish deploying the node and configuring the load balancer.
d. Search for an additional setting, one you will need to enable the connection from the Events Service to the Controller: appdynamics.analytics.server.store.controller.key.
e. Copy the value of the property to your clipboard. You will need to configure this in the Events Service properties file next.
4. Configure the connection from the Events Service to the Controller:
a. In a terminal, navigate to the events-service directory:

cd events-service

b. Open the conf\events-service-api-store.properties file for editing.
c. Find the following property and replace controller-key with the copied key:

ad.accountmanager.key.controller=controller-key

5. Scroll to the end of the events-service-api-store.properties file and verify the minimum and maximum settings for Events Service process heap allocation. For a demonstration installation, you can retain the default of 1 GB. A production installation requires 8 GB:

ad.jvm.heap.min=1g
ad.jvm.heap.max=1g

For the setting value, use g for gigabyte (GB) and m for megabyte (MB).
6. Save and close the events-service-api-store.properties file.
7. Install the Events Service as a Windows service:

bin\events-service.exe service-install -p conf\events-service-api-store.properties --auto-start

The optional auto-start flag causes the Events Service to be installed as an automatically started service. If you do not include the flag, the Events Service is installed as a manually started service. An additional option, log-verbose, increases the verbosity of installation and operation logging, which is useful for troubleshooting.
8. Enter the following command to find the service name for the Events Service:

bin\events-service.exe service-list

9. Pass the service name returned by the service-list command as the -s parameter argument in the following command:

bin\events-service.exe service-start -s ""

Be sure to enclose the name in double quotes.
10. Check the health of the new node:

bin\events-service.exe check-health -hp localhost:9081

For the port, pass the administration port for the Events Service, 9081 by default. Verify that "Healthy" appears as the service status, indicating that the process is operating normally:

[appduser@controller-one events-service]$ bin/events-service.exe check-health -hp 192.168.33.22:9081
[2015-12-09T18:30:45,342-08:00] HV000001: Hibernate Validator 5.0.2.Final
[2015-12-09T18:30:45,956-08:00] Individual statuses below:
[2015-12-09T18:30:45,956-08:00] [192.168.33.22:9081] status is [200 OK]
[2015-12-09T18:30:45,956-08:00] Overall status Healthy
...

11. Configure the connections from the Analytics Agent, EUM Server, or Database Monitoring agents to the Events Service, as described in Connect to the Events Service.

Deploying an Events Service Cluster (3 Nodes)

The following steps describe how to deploy an Events Service cluster made up of three nodes, the minimum size of the Events Service cluster. These steps apply whether you are performing a new installation of a cluster or expanding a single node deployment into a three-node cluster. For information on expanding beyond a three node cluster, see Adding Nodes to a Cluster.

Before starting, review the topology notes in Install the Events Service and make sure that all machines in the cluster meet the system requirements.

When ready, perform these steps on each of the three nodes.

1. Follow the steps for configuring a single node in the 1-node installation above. Additionally, configure the following settings in the conf\events-service-api-store.properties file:
a. Change the value of the ad.es.node.minimum_master_nodes property to two:

ad.es.node.minimum_master_nodes=2

The setting specifies the minimum number of master-eligible instances that must be available in order to elect a new master. Since an Events Service cluster has three master nodes, this value should be two for a cluster.


b. Set the value of ad.es.event.index.shards to the number of nodes, in this case three:

ad.es.event.index.shards=3

As of 4.2.1, it is unnecessary to change this value if it is already higher than the number of nodes.
c. Set the replication factor to 1 by changing the ad.es.event.index.replicas and ad.es.metadata.replicas properties, as follows:

ad.es.event.index.replicas=1
ad.es.event.index.hotLifespanDays=10
ad.es.event.index.warmLifespanDays=0

ad.es.metadata.replicas=1

d. For the unicast hosts property, add the hostname or IP address, along with the port range 9300-9400, for each node in the cluster:

ad.es.node.unicast.hosts=node1.example.com[9300-9400],node2.example.com[9300-9400],node3.example.com[9300-9400]

e. Change the publish host to the IP address or hostname of this machine. For example:

ad.es.node.network.publish.host=node2.example.com

f. Allocate heap space for the process. To set it to 8 GB, for example:

ad.jvm.options.name=events-service.vmoptions
ad.jvm.heap.min=8g
ad.jvm.heap.max=8g

Set the heap size to half of the size of available RAM on the system, up to a maximum of 31 GB. For the setting value, g indicates gigabyte (GB), and m indicates megabyte (MB).
g. Save and close the file.
2. Install the Events Service as a Windows service:

bin\events-service.exe service-install -p conf\events-service-api-store.properties --auto-start

The optional auto-start flag causes the Events Service to be installed as an automatically started service. If you do not include the flag, the Events Service is installed as a manually started service. An additional option, log-verbose, increases the verbosity of installation and operation logging, which is useful for troubleshooting.
3. Enter the following command to find the service name for the Events Service:

bin\events-service.exe service-list

4. Pass the service name returned by the service-list command as the -s parameter argument in the following command:

bin\events-service.exe service-start -s ""

Be sure to enclose the name in double quotes.
5. Check the health of the new node:

bin\events-service.exe check-health -hp localhost:9081

Note: At least two nodes must be running before you run the command.

For the port, pass the administration port for the Events Service, 9081 by default. Verify that "Healthy" appears as the service status, indicating that the process is operating normally:

[appduser@controller-one events-service]$ bin/events-service.exe check-health -hp 192.168.33.22:9081
[2015-12-09T18:30:45,342-08:00] HV000001: Hibernate Validator 5.0.2.Final
[2015-12-09T18:30:45,956-08:00] Individual statuses below:
[2015-12-09T18:30:45,956-08:00] [192.168.33.22:9081] status is [200 OK]
[2015-12-09T18:30:45,956-08:00] Overall status Healthy
...

6. Configure a load balancer to distribute traffic to the Events Service cluster, as described in Load Balance Events Service Traffic.
7. Connect the Controller and other clients (Analytics Agent, EUM Server, or Database Monitoring agents) to the Events Service, as described in Connect to the Events Service.

Expanding a Cluster (4+ Nodes)

The Events Service cluster is horizontally scalable. You can add nodes to an existing cluster without affecting or having to restart the existing nodes.

Before starting, prepare the new cluster machine. Verify system requirements and prepare the environment as described above.

For each node beyond the original three master nodes, download and configure the nodes as previously described. The configuration steps for any nodes added to the cluster after the initial three master nodes are as follows:

1. For each cluster node beyond the initial three master nodes, open the conf\events-service-api-store.properties file for editing and make these configuration changes:
a. Set the ad.es.node.master value to false:

ad.es.node.master=false

b. Set the ad.es.node.minimum_master_nodes value to 2.

ad.es.node.minimum_master_nodes=2


c. Set the value of ad.es.event.index.shards to the number of nodes in the cluster. As of 4.2.1, it is unnecessary to change this value if it is already higher than the number of nodes.

ad.es.event.index.shards=


d. For the unicast hosts property, add the hostnames or IP addresses of all nodes in the cluster, including the node you are adding. For each node specify the ports on which the nodes communicate, 9300-9400. For example:

ad.es.node.unicast.hosts=node1.example.com[9300-9400],node2.example.com[9300-9400],node3.example.com[9300-9400],node4.example.com[9300-9400]

You do not need to reconfigure the unicast hosts settings for existing cluster members, as the new node can join the cluster dynamically.
e. Change the publish host to the IP address or hostname of this machine. For example:

ad.es.node.network.publish.host=node4.example.com

f. Disable health checking:

ad.es.health.tool.enabled=false

Health checking is intended for the first three nodes in the cluster only.
g. While editing the events-service-api-store.properties file, scroll to the end of the file and verify the minimum and maximum settings for Events Service process heap allocation:

ad.jvm.heap.min=8g
ad.jvm.heap.max=8g

h. Save and close the file.
2. Install the Events Service as a Windows service:

bin\events-service.exe service-install -p conf\events-service-api-store.properties --auto-start

The optional auto-start flag causes the Events Service to be installed as an automatically started service. If you do not include the flag, the Events Service is installed as a manually started service. An additional option, log-verbose, increases the verbosity of installation and operation logging, which is useful for troubleshooting.
3. Enter the following command to find the service name for the Events Service:

bin\events-service.exe service-list

4. Pass the service name returned by the service-list command as the -s parameter argument in the following command:


bin\events-service.exe service-start -s ""

5. Check the health of the new node:

bin\events-service.exe check-health -hp localhost:9081

Note: At least two nodes must be running before you run the command.

For the port, pass the administration port for the Events Service, 9081 by default. Verify that "Healthy" appears as the service status, indicating that the process is operating normally:

[appduser@controller-one events-service]$ bin/events-service.exe check-health -hp 192.168.33.22:9081
[2015-12-09T18:30:45,342-08:00] HV000001: Hibernate Validator 5.0.2.Final
[2015-12-09T18:30:45,956-08:00] Individual statuses below:
[2015-12-09T18:30:45,956-08:00] [192.168.33.22:9081] status is [200 OK]
[2015-12-09T18:30:45,956-08:00] Overall status Healthy
...

6. Modify your load balancer rules to include the new cluster node. For more information, see Load Balance Events Service Traffic.
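Taken together, the property edits from step 1 might look like the following sketch for a hypothetical fourth node named node4.example.com in a four-node cluster. The hostnames, the shard count, and the heap values are examples only, not required values:

ad.es.node.master=false
ad.es.node.minimum_master_nodes=2
ad.es.event.index.shards=4
ad.es.node.unicast.hosts=node1.example.com[9300-9400],node2.example.com[9300-9400],node3.example.com[9300-9400],node4.example.com[9300-9400]
ad.es.node.network.publish.host=node4.example.com
ad.es.health.tool.enabled=false
ad.jvm.heap.min=8g
ad.jvm.heap.max=8g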

Monitoring Cluster Node Health

The check-health command returns the status of an Events Service node or cluster. You can specify a node to check by properties file or IP address and port. For example, by properties file, use:

bin/events-service.exe check-health --properties conf/events-service-api-store.properties

To check multiple Events Service instances, use the -hp argument and pass the IP address and admin port of each Events Service host as space-separated arguments. For example:

bin/events-service.exe check-health -hp 192.168.32.231:9081 192.168.32.232:9081 192.168.33.231:9081

The default administration port is 9081. This port on each Events Service node must be accessible from the machine on which you run the command.

If the Events Service node is operating normally, the output indicates a status of Healthy for the node.

Removing Nodes from a Cluster

To remove a node that is not enabled for operation as a master node from the cluster, simply stop the Events Service on the node or remove the machine it runs on from the network.

If you remove a master node, you need to reconfigure an existing node to enable operation as a master node, or add a new node with the master option enabled:

ad.es.node.master=true

If reconfiguring a node, restart the node after changing the configuration.

Starting and Stopping the Events Service

At installation, the Events Service is installed as a service and is left running upon completion of the installation. You can start and stop it as a service or as a foreground process, as described here.

Starting and Stopping as a Foreground Process

To start the Events Service as a foreground process, use the following command:

bin\events-service.exe start -p conf\events-service-api-store.properties

To stop the Events Service as a foreground process, use this command:

bin\events-service.exe stop

Stopping and Starting as a Windows Service

You can stop and start the Events Service as a Windows service from the Services Manager. You can also stop and start it using the events-service.exe tool, as follows:

1. Enter the following command to find the service name for the Events Service:

bin\events-service.exe service-list

2. Pass the service name returned by the service-list command as the -s parameter argument to the following command. Enclose the service name in double quotes.

bin\events-service.exe service-start -s ""

To stop the service, run this command:

bin\events-service.exe service-stop -s ""

Uninstalling Events Service as a Windows Service

You can remove the Events Service as a Windows service after installation. Removing the service does not delete the Events Service software from the machine.

To remove the Events Service as a Windows service:

1. Use the list service command to find the service name for the Events Service:

bin\events-service.exe service-list

2. Use the name returned for the service as the -s parameter argument to the following command:

bin\events-service.exe service-uninstall -s ""

Be sure to enclose the name in double quotes.

Load Balance Events Service Traffic

On this page:

About Load Balancing Events Service Traffic Nginx Sample Configuration HA Proxy Sample Configuration: Terminating SSL at the Load Balancer

This topic describes how to configure a sample load balancer for the Events Service.

About Load Balancing Events Service Traffic

To distribute load among the members of an Events Service cluster, you need to set up a load balancer. For a single node Events Service deployment, using a load balancer is optional but recommended, since it minimizes the work of scaling up to an Events Service cluster later.

To configure the load balancer, add the Events Service cluster members to a server pool to which the load balancer distributes traffic on a round-robin basis. Configure a routing rule for the primary port (9080 by default) of each Events Service node. Every member of the Events Service cluster, master node or not, needs to be included in the routing rule. Keep in mind that increasing the size of the cluster will involve changes to the load balancer rules described here.

The following figure shows a sample deployment scenario. The load balancer forwards traffic for the Controller and any Events Service clients, Analytics Agents in this example.

The following instructions describe how to install and configure a load balancer for the Events Service. The steps below provide two examples: load balancing with Nginx and load balancing with HAProxy with SSL termination at the load balancer. The steps demonstrate commands in a CentOS 6.6 Linux operating system environment. Adjust the steps as appropriate for your operating system or any other site-specific requirements.

Nginx Sample Configuration

1. Install the Nginx software. You can install Nginx on most Linux distributions using the built-in package manager for your type of distribution, such as apt-get or yum. On a CentOS system, you can use yum as follows:

sudo yum install epel-release
sudo yum install nginx

2. Add the following configuration to a new file under the Nginx configuration directory, for example, to /etc/nginx/conf.d/eventservice.conf.

upstream events-service-api {
    server 192.3.12.12:9080;
    server 192.3.12.13:9080;
    server 192.3.12.14:9080;
    server 192.3.12.15:9080;
    keepalive 15;
}

server {
    listen 9080;
    location / {
        proxy_pass http://events-service-api;
        proxy_http_version 1.1;
        proxy_set_header Connection "Keep-Alive";
        proxy_set_header Proxy-Connection "Keep-Alive";
    }
}

In the example, there's a single upstream context for the API-Store ports on the cluster members. By default, Nginx distributes traffic to the hosts on a round-robin basis. 3. Check the following operating system settings on the machine: Permit incoming connections in the firewall built into the operating system, or disable the firewall if it is safe to do so. On CentOS 6.6, use the following command to insert the required configuration in iptables:

sudo iptables -I INPUT -p tcp --dport 9080 -j ACCEPT

To turn off the firewall, you can run these commands:

sudo service iptables save
sudo service iptables stop
sudo chkconfig iptables off

If necessary, disable SELinux security enforcement by editing /etc/selinux/config and setting SELINUX=disabled. Restart the computer for this setting to take effect. 4. Start Nginx:

sudo nginx

Nginx starts and now directs traffic to the upstream servers. If you get errors regarding unknown directives, make sure you have the latest version of Nginx.
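As a quick sanity check (not part of the original procedure), you can send a request through the load balancer to the primary Events Service port. The exact response body depends on the Events Service API; the point is simply to confirm that the connection is accepted and forwarded rather than refused. Here, <load-balancer-host> is a placeholder for your load balancer's hostname:

curl -v http://<load-balancer-host>:9080/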

HA Proxy Sample Configuration: Terminating SSL at the Load Balancer

By terminating SSL at the load balancer in front of the Events Service cluster, you can relieve the Events Service machines from the processing burden of SSL. Since the connections between the load balancer and Events Service machines are not secured in this scenario, it is only suitable for deployments in which the load balancer and Events Service machines reside within an internal, secure network.

The following instructions describe how to set up SSL termination at the load balancer. These steps use HAProxy as the example load balancer. An overview of the steps:

Step 1: Install the HAProxy Software Step 2. Create the Security Certificate Step 3. Configure the Load Balancer Step 4: Configure the Agent Step 5: Configure the Controller

The following diagram shows a sample deployment reflected in the configuration steps:

Before Starting

To perform these steps, you need:

Root access on the load balancer machine OpenSSL installed on the load balancer machine HAProxy software (minimum version HAProxy 1.5) on the load balancer machine

Step 1: Install the HAProxy Software

If not already installed, install HAProxy on the load balancer machine. The manner in which you install it depends on your operating system and the package manager it uses. If using the yum package manager on Linux, for example, enter the following command:

sudo yum install haproxy

Step 2. Create the Security Certificate

The security certificate secures the connection between the load balancer and Events Service clients, including the Application Analytics Agent. You can use a self-signed certificate or a certificate signed by a certificate authority (CA) to secure the connection between the load balancer and clients. The following steps walk you through each scenario:

Create a Self-Signed Certificate on the Load Balancer Machine Create a CA-Signed Certificate

For production use, AppDynamics strongly recommends the use of a certificate signed by a third-party CA or your own internal CA rather than a self-signed certificate.

Create a Self-Signed Certificate on the Load Balancer Machine

1. From the command line prompt on the Load Balancer machine, create a directory for the certificate resources and change to that directory:

sudo mkdir -p /etc/ssl/private
cd /etc/ssl/private/

2. Create the certificate by running the following command, specifying the number of days for which you want the certificate to be valid (such as 365 for a full year) as the -days argument:

sudo openssl req -x509 -nodes -days -newkey rsa:2048 -keyout ./events_service.key -out ./events_service.crt

3. Respond to the prompts to create the certificate. For the Common Name, enter the hostname for the load balancer machine as identified by external DNS (that is, the hostname that agents will use to connect to the Events Service). This is the domain that will be associated with the certificate. 4. Put the certificate artifacts in a PEM file, as follows:

chmod 600 events_service.crt events_service.key
cat events_service.crt events_service.key > events_service.pem
chmod 600 events_service.pem
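To double-check the certificate you just generated (an optional verification step, not part of the original procedure), you can print its subject and validity dates with OpenSSL:

openssl x509 -in events_service.crt -noout -subject -dates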

Create and Install a Certificate Signed by a Certificate Authority

1. From the command line prompt of the Load Balancer machine, create a directory for the certificate resources and change to that directory:

sudo mkdir -p /etc/ssl/private
cd /etc/ssl/private/
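The CSR command in the next step assumes a private key named events_service.key already exists in this directory. If you still need to generate one, a typical command is shown below; this is an assumption about your environment, not part of the original procedure:

sudo openssl genrsa -out /etc/ssl/private/events_service.key 2048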

2. Generate a Certificate signing request (CSR) based on the private key. For example:

openssl req -new -sha256 -key /etc/ssl/private/events_service.key -out /etc/ssl/private/events_service.csr

3.

3. Submit the events_service.csr file to a third-party CA or your own internal CA for signing. When you receive the signed certificate, install it along with the CA root certificate. 4. Depending on the format of the certificates returned to you by the Certificate Authority, you may need to put the certificate and key in PEM format, for example:

chmod 600 events_service.key
cat events_service.key > events_service.pem
chmod 600 events_service.pem

In the cat command, include the signed certificate returned to you by the Certificate Authority. Also include any intermediate CA certificates, if present, when creating the PEM file.

Step 3. Configure the Load Balancer

1. Open the HAProxy configuration file for editing, /etc/haproxy/haproxy.cfg. 2. Insert the following configuration at the end of the file. Replace the placeholder addresses with the host names or IP addresses of the cluster machines. The ports should be the primary listening ports of the Events Service nodes.

frontend events_service_frontend
    bind *:9443 ssl crt /etc/ssl/private/events_service.pem
    mode tcp
    reqadd X-Forwarded-Proto:\ https
    default_backend events_service_backend

backend events_service_backend
    mode tcp
    balance roundrobin
    server node1 192.3.12.12:9080 check
    server node2 192.3.12.13:9080 check
    server node3 192.3.12.14:9080 check
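Before restarting HAProxy, you may want to validate the edited configuration file. HAProxy can check a configuration without applying it; the path shown is the file edited in step 1:

sudo haproxy -c -f /etc/haproxy/haproxy.cfg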

3. Start the HAProxy load balancer:

sudo service haproxy restart

Step 4: Configure the Agent

Perform these steps on each machine on which the Analytics Agent runs.

1. Transfer a copy of the signed certificate, events_service.crt, to the home directory (denoted as $HOME in the instructions below) of the machine running the agent using Secure Copy (scp) or the file transfer method you prefer. 2. Copy the certificate file to the directory location of the trust store used by the agent:

cp $HOME/events_service.crt $JAVA_HOME/jre/lib/security/

3. Navigate to the directory and make a backup of the existing cacerts.jks file:

cd $JAVA_HOME/jre/lib/security/
cp cacerts.jks cacerts.jks.old

4. Import the certificate into the Java keystore: If using a signed certificate, import the certificate as follows:

keytool -import -trustcacerts -v -alias events_service -file /path/to/CA-cert.txt -keystore cacerts.jks

If using a self-signed cert, import the certificate as follows:

keytool -import -v -alias events_service -file events_service.crt -keystore cacerts.jks

When prompted, enter the password for the truststore (default is changeit) and enter yes when asked whether to trust this certificate. 5. Verify that the certificate is in the truststore:

keytool -list -keystore cacerts.jks -alias events_service

6. Navigate to the installation folder of the Analytics Agent and edit conf/analytics-agent.properties to change the value of the HTTP endpoint property:

http.event.endpoint=https://:9443/v1

7. Start the Analytics Agent (or restart it, if it is already running). 8. Check the health of the agent. In a web browser, you can do so by going to the health check URL at http://:9091/healthcheck?pretty=true.

If the agent is operating normally, the healthy field is set to true, as in the following example:

"analytics-agent / Connection to https://:9443/v1" : { "healthy" : true }

Step 5: Configure the Controller

If not already done, configure the connection from the Controller to the Events Service through the load balancer using a secure connection as well:

1. Transfer a copy of the signed certificate, events_service.crt, to the home directory (denoted as $HOME in the instructions below) of the machine running the Controller using Secure Copy (scp) or the file transfer method you prefer. 2. Navigate to the directory containing the Controller trust-store (as determined by the Controller startup parameter -Djavax.net.ssl.trustStore).

3. Make a backup of the existing cacerts.jks file:

cp cacerts.jks cacerts.jks.old

4. Import the certificate into the Java keystore: If using a signed certificate, import the certificate as follows:

keytool -import -trustcacerts -v -alias -file /path/to/CA-cert.txt -keystore cacerts.jks

If using a self-signed cert, import the certificate as follows:

keytool -import -v -alias events_service -file events_service.crt -keystore cacerts.jks

When prompted, enter the password for the truststore (default is changeit) and enter yes when asked whether to trust this certificate. 5. Verify that the certificate is in the truststore:

keytool -list -keystore cacerts.jks -alias events_service

6. Restart the Controller. 7. From the Administration Console, search for the following Controller Setting: appdynamics.analytics.server.store.url. 8. Set its value to the Load Balancer URL value: https://:9443/v1

You can now verify that the Analytics UI is accessible and showing data.

Connect to the Events Service

On this page:

Configuring Connections Platform Connection Settings Overview Database Monitoring Connection Settings End User Monitoring Connection Settings

After installing the Events Service, configure the connection for the AppDynamics components that send data to the Events Service.

Configuring Connections

The components that send data to the Events Service need to know how to connect to the Events Service. This is determined by Controller Settings in the Controller Administration Console (see Access the Administration Console).

The settings by component are:

Application Analytics: appdynamics.analytics.server.store.url Database Monitoring: appdynamics.analytics.local.store.url

End User Monitoring: eum.es.host

The value should be the Events Service endpoint, whether connecting directly to a single node Events Service or the endpoint as exposed by a load balancer. For example, http://:9080.

Platform Connection Settings Overview

Along with connection URLs, the AppDynamics platform relies on API keys to establish connections between components. The following provides an overview of which keys and connection strings you need to configure and need to match, depending on which components you are using.

The keys for each component are located in the Controller Settings in the Administration Console and have a corresponding key in the properties file in the component home directory. Note that values are automatically generated for these keys. However, they can be any value, as long as the values of corresponding keys match.

Database Monitoring Connection Settings

The Database Monitoring module stores a portion of its data (such as WaitState Info, Query info, and so on) in the Events Service. In installations that have both Analytics and Database Monitoring, we recommend they share an Events Service instance or cluster. When configuring connection settings, implementing this recommendation means that:

The appdynamics.analytics.server.store.url value (used by Analytics) should be the same as the appdynamics.analytics.local.store.url value (used by Database Monitoring), which should match the Events Service endpoint URL (for example, http://eventsservice.example.com:9080). The appdynamics.analytics.server.store.controller.key value (used by Analytics) should match the appdynamics.analytics.local.store.controller.key value (used by Database Monitoring), both of which should match the value of ad.accountmanager.key.controller in the Events Service.
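The following sketch illustrates the matching values described above, using a hypothetical host (eventsservice.example.com) and a placeholder key. The first group are Controller Settings edited in the Administration Console (shown in key=value form for readability, not as a file); the last line is a property in the Events Service properties file:

appdynamics.analytics.server.store.url=http://eventsservice.example.com:9080
appdynamics.analytics.local.store.url=http://eventsservice.example.com:9080
appdynamics.analytics.server.store.controller.key=<shared-controller-key>
appdynamics.analytics.local.store.controller.key=<shared-controller-key>

# conf/events-service-api-store.properties on the Events Service
ad.accountmanager.key.controller=<shared-controller-key>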

End User Monitoring Connection Settings

To connect both Application Analytics and EUM to the Events Service:

1. In the file /bin/eum.properties, set the value of the properties analytics.serverScheme, analytics.serverHost, and analytics.serverPort to the HTTP protocol, hostname, and listening port used to connect to the Events Service. For example, if the URL used to connect to the Events Service is http://:9080, you would set the properties as follows: analytics.serverScheme: http analytics.serverHost: analytics.serverPort: 9080 2. From the Admin Console, navigate to Controller Settings and set the eum.es.host property to the URL to the Events Service. 3. Again, from Controller Settings, find the property appdynamics.es.eum.key and assign its value to the property ad.accountmanager.key.eum in the file /events_service/conf/events-service-api-store.properties.

The value of appdynamics.es.eum.key will automatically be set to the property analytics.accountAccessKey of the file /bin/eum.properties.
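As a sketch of step 1, assuming the Events Service is reached at http://eventsservice.example.com:9080 (a hypothetical endpoint), the relevant entries in eum.properties would be along these lines; match the key/value syntax of the existing entries in your file:

analytics.serverScheme=http
analytics.serverHost=eventsservice.example.com
analytics.serverPort=9080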

Upgrade the Events Service

On this page:

About the Upgrade Upgrade the Events Service

This topic describes how to upgrade a manually installed standalone Events Service. The Events Service instance that is embedded with the Controller is upgraded automatically when you upgrade the Controller using the installer. For non-embedded (that is, remote)

Events Service nodes, you need to follow the steps here to upgrade the Events Service software to 4.2.

If you manage the Events Service using the Platform Administration Application, see Install the Events Service on Linux for upgrade instructions.

About the Upgrade

This page describes how to upgrade a manually installed Events Service instance.

The procedure requires the Events Service process to be briefly shut down. If you have deployed the Events Service in a cluster, you can perform the upgrade procedure as a rolling upgrade, avoiding service downtime. As a rolling upgrade, you upgrade each machine while the other cluster nodes continue to respond to client requests.

For an Events Service deployment you have installed and manage with the Platform Administration Application, you can use the Platform Administration Application. To use the application to upgrade the Events Service, you must upgrade the Controller, which also upgrades the Platform Administration Application version.

Perform the upgrade in the following order if you did not use the Platform Administration Application to install the Events Service:

1. Upgrade the Events Service 2. Upgrade the EUM Server 3. Upgrade the Controller

Perform the upgrade in the following order if you used the Platform Administration Application to install the Events Service:

1. Shut down the EUM Server. 2. Upgrade the Controller. 3. Upgrade the Events Service with the application. 4. Upgrade the EUM Server.

Note that upgrading platforms that use the Platform Administration Application results in downtime of the EUM Server.

Upgrade the Events Service

1. Download the Events Service distribution, events-service.zip, from the AppDynamics download site to the Events Service machine. 2. Stop the Events Service processes:

bin/events-service.sh stop

3. Rename the existing Events Service directory, for example, to events-service-backup. 4. Unzip the Events Service distribution archive you downloaded to the location where you want the Events Service to run. 5. Migrate configuration changes from the properties files in the backup Events Service directory to the conf\events-store-a pi-store.properties file in the new Events Service directory. Depending on which type of deployment you are using, this involves inspecting and migrating settings from: events-service-all.properties, or events-service-api-store.properties 6. Verify that the new Events Service home directory exists. The Event Service home directory is determined by the ad.es.path.home property in the property file used to start up the Events Service. If the directory does not exist, create it. For example, create the following directory: /opt/appdynamics/events-service/appdynamics-events-service 7. Move (do not copy) the old Events Service data directory to the new Events Service home directory directory. For example:

mv /opt/appdynamics/events-service-backup/appdynamics-events-service/data /opt/appdynamics/events-service/appdynamics-events-service/

8. Restart the Events Service processes from the new directory:

nohup bin/events-service.sh start -p conf/events-service-api-store.properties &

9. Check the health API of the node.

For information on performing these steps, see Install the Events Service on Windows.
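On Linux, steps 2 through 8 might look like the following sketch. The directory names are the examples used in the steps above, the layout of the unzipped distribution is an assumption, and the configuration migration in step 5 is a manual edit noted only as a comment:

# Stop the old Events Service and move it aside
cd /opt/appdynamics/events-service
bin/events-service.sh stop
cd /opt/appdynamics
mv events-service events-service-backup

# Unzip the new distribution so that it becomes the new events-service directory
unzip /tmp/events-service.zip -d /opt/appdynamics/events-service

# Step 5: manually migrate your property changes from
# events-service-backup/conf to events-service/conf (not shown)

# Create the new home directory and move the old data directory into it
mkdir -p /opt/appdynamics/events-service/appdynamics-events-service
mv /opt/appdynamics/events-service-backup/appdynamics-events-service/data \
   /opt/appdynamics/events-service/appdynamics-events-service/

# Restart from the new directory
cd /opt/appdynamics/events-service
nohup bin/events-service.sh start -p conf/events-service-api-store.properties &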

Backup and Restore Events Service Data

On this page:

Prepare the File System Create a Snapshot Restore from a Snapshot Migrating Events Service Data

Backing up Events Service data helps you to recover from hardware or other type of failure of an Events Service machine. A snapshot represents the backed up data for the entire Events Service cluster. In addition to using it for failure recovery, you can use a snapshot to migrate Events Service to a new instance.

The Events Service tool—events-service.sh for Linux and events-service.exe for Windows—includes commands for preparing the system for backing up with snapshots, generating a snapshot, and restoring from a snapshot, as described below.

The following instructions show sample commands for Linux. If using Windows, be sure to use the events-service.exe form of the executable rather than the .sh form, and adjust the sample directory paths as needed.

Prepare the File System

When planning your backup strategy, it is important to consider the storage location and frequency of backups. The system that will serve as the repository for snapshots must be able to handle high I/O demands in a performant manner. SSD-based storage is recommended. Also ensure you have enough disk space on the repository system.

Only the first snapshot results in a full copy of the data. Each subsequent snapshot is incremental, applying only the changes since the last snapshot. Backing up frequently, therefore, does not result in substantially more storage overhead than backing up infrequently.

The Events Service includes tools for setting up the snapshot repository. It supports the following snapshot repository location types:

A file system location that is shared among the Events Service nodes. An Amazon S3 bucket

After choosing and preparing the system that will host the snapshot, set up each Events Service node as follows:

1. If using a shared file system, mount the shared filesystem at the default location for the backup repository, /events-service/appdynamics-events-service-backup. To change this backup location, set the ad.es.backupmanager.path.repo setting in conf/events-service-api-store.properties. Keep in mind, however, that changing the properties file requires a restart of the Events Service node to have the change take effect. 2. Set up the repository on each node. Use the appropriate command for your repository type: For shared file system:

bin/events-service.sh snapshot-configure-fs -p conf/events-service-api-store.properties

For Amazon S3:

bin/events-service.sh snapshot-configure-s3 -p conf/events-service-api-store.properties -bucket "s3-bucket-name"

The snapshot-configure-s3 command accepts additional optional arguments, including arguments for passing the access key and secret key for S3. Run "bin/events-service.sh -h" to view all options.

Look for a message similar to the following to verify that the configuration succeeded:

[2015-12-17T15:43:04,092-08:00] Successfully configured snapshot repository!

Create a Snapshot

After setting up the repository, you can generate a snapshot of the Events Service data. If you are backing up a cluster, you only need to run the command from one of the master nodes.

Generate a snapshot:

bin/events-service.sh snapshot-run -p conf/events-service-api-store.properties

You can use this command to script regular backups based on your backup policy. The following output indicates that backup was successful:

Take snapshot request executed successfully. Snapshot itself may still be in progress.

To check the progress of the snapshot, use snapshot-status.

bin/events-service.sh snapshot-status -p conf/events-service-api-store.properties
[2015-12-17T17:00:09,955-08:00] HV000001: Hibernate Validator 5.0.2.Final
[2015-12-17T17:00:10,589-08:00] Overall restore status for cluster [appdynamics-events-service-cluster] is [SUCCESS].
[2015-12-17T17:00:10,589-08:00] In 171 milliseconds, restore has processed:
[2015-12-17T17:00:10,589-08:00] 22 of 22 files.
[2015-12-17T17:00:10,589-08:00] 13054 of 13054 bytes.

If you don't specify a snapshot ID, the command gets the status for the most recent snapshot. You can use the list-snapshot command to see a list of available snapshots.
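For example, to script regular backups as suggested above, you might add a cron entry like the following on a master node; the schedule, installation path, and log location are all assumptions:

# Run a snapshot every day at 02:00
0 2 * * * cd /opt/appdynamics/events-service && bin/events-service.sh snapshot-run -p conf/events-service-api-store.properties >> /var/log/events-service-snapshot.log 2>&1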

Restore from a Snapshot

By restoring a snapshot, you replace the data store configured for the Events Service with one that was previously saved as a snapshot.

To restore a snapshot, use the snapshot-restore command, passing the properties file for the Events Service instance you are restoring. The following shows an example with sample output:

bin/events-service.sh snapshot-restore -p conf/events-service-api-store.properties
[2015-12-17T17:02:52,264-08:00] HV000001: Hibernate Validator 5.0.2.Final
[2015-12-17T17:02:52,811-08:00] Restore snapshot request executed successfully. Restore is now in progress. Use the snapshot-status command to view the current restore status.

Check the status of the snapshot restore using the snapshot-status command. For example:

bin/events-service.sh snapshot-status -p conf/events-service-api-store.properties
[2015-12-17T17:03:03,548-08:00] HV000001: Hibernate Validator 5.0.2.Final
[2015-12-17T17:03:04,135-08:00] Overall restore status for cluster [appdynamics-events-service-cluster] is [SUCCESS].
[2015-12-17T17:03:04,135-08:00] In 171 milliseconds, restore has processed:
[2015-12-17T17:03:04,135-08:00] 22 of 22 files.
[2015-12-17T17:03:04,135-08:00] 13054 of 13054 bytes.

You can restore a specific snapshot by passing the snapshot ID with the command. Otherwise, the most recent snapshot is restored.

bin/events-service.sh snapshot-restore -p conf/events-service-api-store.properties -id

You can use the list-snapshot command to get a list of snapshot IDs.

Migrating Events Service Data

In addition to data backup and recovery, you can use the snapshot utility to migrate data from one Events Service instance to another.

The target Events Service needs to be a fresh installation; that is, data in two different Events Service instances cannot be merged. Be sure to avoid configuring the Events Service URL with the new instance location in the Controller configuration until you have completed these steps.

To migrate Events Service data, follow these general steps:

1. Prepare the new Events Service nodes, as described above. 2. On each new Events Service node, mount the shared directory where the repository is located. 3. From a master node in the new cluster, restore the snapshot by ID, as described above, passing the property file that defines the new cluster as the -p argument. 4. When finished, change the connection from the Controller to the Events Service and any Events Service clients, as described in Connect to the Events Service, to use the new instance.

Install the EUM Server

On this page:

Installation Overview Deployment Modes for the EUM Server Embedded Geo Server System Requirements for the EUM Server Check an In-Service Controller for the EUM Server Install the On-Premises Events Service Run the EUM Server Installer Update Your Agents Start and Stop the EUM Server

The default End User Monitoring deployment assumes that EUM agents (Mobile and Browser) send their data to the EUM Cloud, a cloud-based processor. To deploy EUM completely on-premises, you need to install the EUM Server, the on-premises version of the EUM Cloud, as described here.

Installation Overview

The EUM Server receives data from EUM agents, processes and stores that data, and makes it available to the AppDynamics Controller. Certain EUM features—specifically, Browser Request Analytics and Mobile Request Analytics, features of Application Analytics that extend the functionality of Browser and Mobile Analyze—require access to the AppDynamics Events Service.

To set up a complete on-premises EUM Server deployment, therefore, you need to:

1. Install the on-premises Controller or prepare an in-service Controller to work with the EUM Server 2. Install the on-premises Events Service and configure it to work with your on-premises Controller 3. Install the on-premises EUM Server and configure it to work with your Events Service and Controller.

Deployment Modes for the EUM Server

For demonstration and light testing purposes, the EUM Server can be deployed to the same host as the Controller. For production, however, the EUM Server must be deployed to a separate host.

To install the components on a single host, you run the EUM installer only once on the target machine. For a dual (or split) host installation, you run the EUM installer twice: first on the Controller host, to configure the Controller to work with the EUM Server, and then on the EUM Server host, to install and configure it.

In single host mode, the EUM Server listens for connections on port 7001 or 7002. The secure port, 7002, uses a built-in, self-signed certificate.

In a production environment, the EUM Server is likely to operate behind a reverse proxy. A reverse proxy relieves the performance burden of SSL termination from the EUM Server. It also helps ease certificate management and security administration in general. Further, as the connection point for agent beacons, the Server needs to have the security layer of a proxy between itself and the external Internet.

Using a reverse proxy is the recommended method of setting up HTTPS connections for an on-premises EUM Server. If this is not possible in your installation, however, it is possible to set up HTTPS support manually. See information on setting up a custom keystore in Secure the EUM Server.
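As a rough illustration only, the following Nginx sketch terminates SSL and forwards traffic to an EUM Server listening on its default HTTP port. The hostname, certificate paths, and the choice of Nginx are assumptions; see Deploy with a Reverse Proxy and Secure the EUM Server for the supported approaches:

server {
    listen 443 ssl;
    server_name eum.example.com;

    ssl_certificate     /etc/ssl/certs/eum.example.com.crt;
    ssl_certificate_key /etc/ssl/private/eum.example.com.key;

    location / {
        # Forward decrypted traffic to the EUM Server's HTTP listener
        proxy_pass http://127.0.0.1:7001;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}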

Embedded Geo Server

The EUM Server includes an embedded Geo Server that provides the geo information. The Geo Server either obtains geo information from your custom Geo Server or resolves incoming IPv4 addresses (IPv6 addresses are not supported) using MaxMind open source geo data, Neustar data, or custom geo data.

Add New Geo Data

To add new geo data, you can either add a new MaxMind data file, switch to use Neustar data, or create the custom geo data file geo-ip-mappings.xml and place it in the directory eum-processor/bin/. The EUM Server automatically detects and loads new geo data files.

Update the Geo Server

The Geo Server is updated when you update the EUM Server.

Host Your Own Geo Server

Follow the instructions given in Install and Host a Custom Geo Server for Browser RUM.

System Requirements for the EUM Server

The requirements and guidelines for the EUM Server machine (basic usage) are:

40 GB extra disk space
64-bit Windows or Linux operating system
Processing: 4 cores
10 Mbps network bandwidth
Minimum 8 GB memory total (4 GB is defined as max heap in JVM)
NTP enabled on both the EUM Server host and the Controller machine. The machine clocks need to be able to synchronize.

A machine with these specs can be expected to handle around 10K page requests a minute or 10K simultaneous mobile users. Adding on-premises Analytics capability requires increasing these requirements—particularly disk space—considerably, depending on the use case.

Inode Requirements

The filesystem of the machine on which you install EUM should be tuned to handle a large number of small files. In other words, the filesystem should be allocated with a large number of inodes or the filesystem should support dynamic inode allocation.
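On Linux, you can check how many inodes each filesystem was allocated and how many are in use with df; for example:

df -i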

Open File Descriptor and User Process Limits

On Linux, also ensure that open file descriptor and user process limits on the EUM Server machine are set to a sufficient value. For the EUM Server, the hard and soft limits should be as follows:

Open file descriptor limit (nofile): 65535
Process limit (nproc): 65535

See Configure Linux for the Controller for information on how to check and set user limits.
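As a sketch, on many Linux distributions these limits can be raised in /etc/security/limits.conf; here, appduser is a placeholder for the account that runs the EUM Server (see Configure Linux for the Controller for the supported procedure):

appduser  soft  nofile  65535
appduser  hard  nofile  65535
appduser  soft  nproc   65535
appduser  hard  nproc   65535

After logging in again as that user, ulimit -n and ulimit -u should report the new values.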

Check an In-Service Controller for the EUM Server

The EUM Server installer makes these changes to the Controller:

Creates a new database schema in the MySQL database shared with the Controller Registers the EUM Server instance with the Controller Provisions the EUM-enabled license in the Controller

Before you run the installer on an in-service Controller, do the following:

1. Check your Controller version. The 4.2 EUM Server works with the AppDynamics Controller version 4.2 or earlier. Controllers only work with the same or later versions of the EUM Server. That is, a version 4.2 Controller must only be used with a 4.2 or later version of the EUM Server. See Upgrade the EUM Server for information about upgrading the platform. 2. Back up the current version of your Controller. 3. Choose a time window that has minimum impact on service availability. The Controller must be restarted for the configuration changes made by the EUM Server installer to take effect.

Install the On-Premises Events Service

The Analyze function in Browser RUM and the Crash Report and Analyze components in Mobile RUM rely on the AppDynamics Events Service, the Platform's unstructured document store. The Events Service that is configured by default for EUM is a cloud-based service. If you are running on-premises and wish to keep all your processing on-prem, after installing and configuring the Controller you must install an on-premises version of the Events Service as described in this section. Note that relying on the Events Service purely for use with the EUM UI does not require a separate Application Analytics license. Other uses may require a separate license.

There are multiple modes of deploying the Events Service. For detailed information on installing and configuring the Events Service, see Install the Events Service.

Run the EUM Server Installer

Run the EUM installer under the same user account on the target machine as the one used to install the Controller, or using an account that has read, write, and execute permissions to the Controller home directory. Installing with incompatible permission levels—for example, attempting to install the EUM Server as a regular user while the Controller was installed by root user—may result in installation or operation errors.

The EUM Server is automatically installed as a Windows service. Upgraded installations are automatically converted to a Windows service as well.

The installer can be run in three modes:

GUI Console Silent mode with varfile

See the following page for details on installing as appropriate for your deployment mode:

For a demo single host mode, see Install a Single Host (Demo) EUM Server. For a production dual host mode, see Install a Split Host (Production) EUM Server.

Update Your Agents

You must update the address that agents use to send their beacons to the EUM Server based on your configuration. For Browser Real User Monitoring, the Controller updates the JavaScript agent. Simply re-download and deploy it, as described in Set Up and Configure

Browser RUM. For Mobile RUM, the mobile applications themselves need to be updated, using the mobile SDKs. For more information, see Use the APIs of the Android SDK to Customize Your Instrumentation and Use the APIs of the iOS SDK to Customize Your Instrumentation.

Start and Stop the EUM Server

The EUM Server is installed as a Windows service automatically. You can manage how you want this service to run using the Local Services dialog.

On Linux, start the EUM server from the eum-processor directory in the EUM home as follows:

bin/eum.sh start

For a demonstration environment, run the command as sudo.

On Windows, if you ever need to start the EUM Server manually, you can do so by running:

bin\eum-processor.bat start

You can check if the server is running and accessible by going to http://:7001/eumaggregator/ping with your browser. Your browser should display ping.
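The same check can be scripted from the command line; <eum-server-host> is a placeholder for the EUM Server's hostname:

curl http://<eum-server-host>:7001/eumaggregator/ping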

To stop the EUM Server, pass the stop command to the eum script. For example, on Linux, from the eum-processor directory, run:

bin/eum.sh stop

Install a Split Host (Production) EUM Server

On this page:

About the Split Host Installation Configure the Controller for the EUM Server with the GUI Installer Install the EUM Server for a Production Deployment with the GUI Installer Use a Response File for a Split Host Installation

You can run the installer in one of three modes. The GUI and silent installation methods are described below. To start the installer in interactive console mode, start the installer with the -c switch. The console mode prompts you for the equivalent information that appears in the GUI installer screens, as described below.

About the Split Host Installation

Installing on-premises EUM for a production environment requires running the EUM installer on two machines:

On the Controller host On the separate EUM Server host

The EUM installer must be run on the Controller host before you run the installer on the EUM Server host.

Get the EUM installer from the AppDynamics Download Center. Before starting, download the installer distribution and extract it on the target machine and on the Controller machine.

To secure connections from agents to the EUM Server, AppDynamics strongly recommends that SSL traffic is terminated at a reverse proxy that sits in front of the EUM Server in the network path and forwards connections to the EUM Server. If this is not

possible in your installation, however, it is possible to connect with HTTPS directly to the EUM Server, as described below. For information on setting up a custom keystore for production, see Secure the EUM Server.

Configure the Controller for the EUM Server with the GUI Installer

To configure the Controller for use with an on-premises EUM Server in production mode, you must first run the EUM Server installer on the Controller host. This configures the Controller so that it knows where the EUM Server is and adds an account in the database for the EUM Server.

Run the EUM Server installer under the same user account as the one used to install the Controller, or using an account that has read, write, and execute permissions to the Controller home directory. Installing with incompatible permission levels—for example, attempting to install the EUM Server as a regular user while the Controller was installed by root user—may result in installation or operation errors.

To configure the Controller for the EUM Server:

1. Start the installer: On Linux: a. From a command prompt, navigate to the directory to which you downloaded the EUM installer. b. Change permissions on the downloaded installer script to make it executable, as follows:

chmod 775 euem-64bit-linux.sh

c. Run the script as follows:

./euem-64bit-linux.sh

On Windows: a. Open an elevated command prompt (run as administrator) and navigate to the directory to which you downloaded the EUM installer. b. Run the installer:

euem-64bit-windows

2. In the Welcome screen, click Next. 3. In the License Agreement page, scroll to the end of the license agreement, accept the license agreement, and click Next to continue. 4. Select the directory in which you want to install the EUM Server and click Next. 5. Choose Split Host Production Controller Configuration for the installation mode and click Next. In this mode, the installer looks for a Controller on the current machine and configures it to work with on-premises EUM. 6. In the Host information settings, enter the hostname for the EUM Server and the listening ports for the EUM server, 7001 and 7002 by default. Note that the hostname and ports shown here function as the location both to which the EUM Agents send their beacons and from which the Controller fetches the processed beacon information. Use the hostname for the machine as addressed on the network to ensure that the EUM Server can be registered correctly in the Controller configuration. Click Next. 7. In the Controller Information screen, enter the Controller home directory directly or use the Browse button. The directory must be the home directory for the running Controller instance. Click Next. 8. In the Database Information fields, enter the connection settings to the Controller database for the Controller's database root user, which was defined at Controller installation time. Since the database normally runs on the Controller machine, using localhost for the Controller Database host name typically works and helps you avoid permissions issues. The installer will create a database schema for EUM named eum_db in the database. 9. In the EUM Database Password field, enter a password for a new database user for EUM, eum_user, and click Next. 10. When the installer is finished, note the information in the Complete page. It contains any next steps. Click Finish to close the installer.

The installer creates a folder named EUM in the AppDynamics home directory alongside the Controller home. You do not need to use

anything in this folder.

Now run the installer on the EUM Server target machine, as described next.

Install the EUM Server for a Production Deployment with the GUI Installer

After setting up the Controller for use with the EUM Server, run the on-premises EUM installer on the machine on which you want to install the EUM Server. You must install and set up the Controller before attempting this step.

1. Start the installer: On Linux: a. From a command prompt, navigate to the directory to which you downloaded the EUM installer. b. Change permissions on the downloaded installer script to make it executable, as follows:

chmod 775 euem-64bit-linux.sh

c. Run the script as follows:

./euem-64bit-linux.sh

On Windows: a. Open an elevated command prompt (run as administrator) and navigate to the directory to which you downloaded the EUM installer. b. Run the installer:

euem-installer-64bit-windows

2. In the Welcome screen, click Next. 3. Scroll to the end of the license agreement, accept the license agreement, and click Next to continue. 4. Choose Split Host Production EUM Server Installation for the installation mode. This mode installs the EUM Server on this machine. Click Next. 5. If the Events Service is installed and you plan to use EUM analytics features, select the check box and configure connection settings to the Events Service, including the Events Service API Key. For the Events Service API key, enter the value of the appdynamics.es.eum.key setting from the Controller Administration Console.

If the Events Service is not installed and you choose to enable it later, you can do so by configuring the following settings: On the EUM Server, the properties labeled "Analytics server properties" in the eum.properties file. On the Controller, the eum.es.host Controller Setting in the Administration Console. 6. In the Host Information screen, enter the hostname or IP address of the Controller. 7. In the same screen, enter the hostname or IP address of the EUM Server, along with the HTTP or HTTPS listening ports on the EUM Server at which the Controller will connect to the EUM Server (the default HTTP port is 7001 and HTTPS is 7002).

Do not use localhost for the EUM Server hostname, since this value is used to populate the EUM server connection settings on

the Controller. The following shows how the values you enter in this screen map to values in the Controller Settings in the Administration Console:

8. Set the heap size for the EUM Server and click Next. See system requirements in Install the EUM Server for more information about heap size. 9. In the Database Information screen, enter the connection information the EUM Server should use to connect to the Controller database. Also enter the password you specified for the EUM user when configuring the Controller in the previous section. 10. Enter a password for the EUM Server to use to store credentials securely in the credential store and click Next.

With the initial configuration information gathered, the installer completes the setup of the EUM Server. When finished, the EUM server is running.

Use a Response File for a Split Host Installation

To use the response file installation for a split host, you run the installation with the response.varfile twice: first on the Controller machine and then again on the EUM Server host.

To install with a response file:

1. Create the response file with the initial EUM Server settings. a. To set up the Controller host, set the installation mode property in the response file to production-controller-setup, as shown:

euem.InstallationMode=production-controller-setup

b. To install and set up the EUM Server host, set the installation mode property in the response file to production-eum-setup before running the installer.

euem.InstallationMode=production-eum-setup

The following shows the format of a response file, with sample settings:

sys.adminRights$Boolean=false
sys.languageId=en
sys.installationDir=/AppDynamics/EUM
controller.Host=controllerhost
controller.installationDir=/AppDynamics/Controller
euem.InstallationMode=production-eum-setup
euem.Host=eumserverhost
euem.initialHeapXms=1024
euem.maximumHeapXmx=4096
euem.httpPort=7001
euem.httpsPort=7002
mysql.databasePort=3388
mysql.databaseRootUser=root
mysql.dbHostName=localhost
mysqlRootUserPassword=secret
eumDatabasePassword=secret
eumDatabaseReEnterPassword=secret
keyStorePassword=secret
keyStorePasswordReEnter=secret
eventsService.isEnabled$Boolean=true
eventsService.serverScheme=http
eventsService.host=eventsservicehost
eventsService.port=9080
eventsService.APIKey=1ab234567-1234-4321-a345-ab123cde456

2. Run the installer with the following command:

./euem-64bit-linux.sh -q -varfile response.varfile

On Windows, use:

euem-64bit-windows.bat -q -varfile response.varfile

Install a Single Host (Demo) EUM Server

On this page:

About the Single Host Installation Install with the GUI Installer Using the Silent Installer

You can run the installer in one of three modes. The GUI and silent installation methods are described below. To start the installer in interactive console mode, start the installer with the -c switch. The console mode prompts you for the equivalent information that appears in the GUI installer screens, as described below.

About the Single Host Installation

This mode is for demonstration and light testing only. If you are using the Events Service, it must be on a separate host.

Run the EUM Server installer under the same user account as the one used to install the Controller, or using an account that has read, write, and execute permissions to the Controller home directory. Installing with incompatible permission levels—for example, attempting to install the EUM Server as a regular user while the Controller was installed by root user—may result in installation or operation errors.

1. If you do not already have an existing on-premises Controller, install it as described in Install the Controller. 2. Get the EUM installer from the AppDynamics Download Center.

Install with the GUI Installer

Once you have the installer, use it to configure the Controller and install and configure the on-premises EUM Server:

1. Start the installer: On Linux: a. From a command prompt, navigate to the directory to which you downloaded the EUM Server installer. b. Change permissions on the downloaded installer script to make it executable, as follows:

chmod 775 euem-64bit-linux.sh

c. Run the script as follows:

./euem-64bit-linux.sh

On Windows: a. Open an elevated command prompt (run as administrator) and navigate to the directory to which you downloaded the EUM Server installer. b. Run the installer:

euem-64bit-windows

2. In the Welcome screen, click Next. The License Agreement page appears. 3. Scroll to the end of the license agreement, accept the license agreement, and click Next to continue. 4. Select the directory in which you want to install the server and click Next. 5. Choose Single Host Installation for the installation mode. In this mode, the installer looks for a Controller on the current host and an Events Service on a separate host. It then installs the EUM Server on the same host as the Controller. Click Next.

6. If the Events Service is installed and you plan to use EUM analytics features, select the check box and configure connection settings to the Events Service, including the Events Service API Key. For the Events Service API key, enter the value of the appdynamics.es.eum.key setting from the Controller Administration Console.

If the Events Service is not installed and you choose to enable it later, you can do so by configuring the following settings: On the EUM Server, the properties labeled "Analytics server properties" in the eum.properties file. On the Controller, the eum.es.host Controller Setting in the Administration Console. 7. In the Host information settings, use the hostname for the machine as addressed on the network rather than "localhost" to ensure that the Server can be registered correctly in the Controller configuration. If necessary modify the settings for the default listening ports and heap size allocated for EUM Server. Defaults are pre-populated. Note that the hostname and ports shown here are the location both to which the EUM Agents send their beacons and from which the Controller fetches the processed beacon data. Click Next. 8. Enter the Controller home directory directly or use the Browse button. The directory must be the home directory for the running Controller instance. Click Next. 9. In the Database Information fields, enter the connection settings to the Controller database for the Controller's database root user, which was defined at Controller installation time. Since the database normally runs on the Controller machine, using loc alhost for the Controller Database host name typically works and helps you avoid permissions issues. The installer will create a table for EUM named eum_db in the database. 10. In the EUM Database Password field, enter a password for a new database user for EUM, eum_user and click Next. 11. When the installer is finished, note the information in the Complete page. It contains next steps and information on how to start the server. Click Finish to close the installer.

In some cases, the EUM Server installer fails to update the Controller with the appropriate hostname and port for the EUM Server itself. In this case, you can set these values manually. Log in to the Administration Console and set the Controller property eum.cloud.host to the value of the EUM Server hostname and port.

12. Enter a password for the Credential Key Store that the EUM Server uses to store credentials securely and click Next. With the initial configuration information gathered, the installer completes the setup of the EUM Server. 13. Restart the Controller to complete the installation.

Using the Silent Installer

Instead of using the installer in GUI mode, you can use the silent installer to perform an unattended installation. The silent installer takes a response file as a source for the initial configuration settings. It's useful for scripting installation or performing large scale deployments.

Use a Response File for a Single Host Installation

1. Create a file named response.varfile on the machine on which you will run the EUM installer with the following contents:

sys.adminRights$Boolean=false
sys.languageId=en
sys.installationDir=/AppDynamics/EUM
controller.Host=localhost
controller.installationDir=/AppDynamics/Controller
euem.InstallationMode=demo
euem.Host=controller
euem.initialHeapXms=1024
euem.maximumHeapXmx=4096
euem.httpPort=7001
euem.httpsPort=7002
mysql.databasePort=3388
mysql.databaseRootUser=root
mysql.dbHostName=localhost
mysqlRootUserPassword=secret
eumDatabasePassword=secret
eumDatabaseReEnterPassword=secret
keyStorePassword=secret
keyStorePasswordReEnter=secret
eventsService.isEnabled$Boolean=true
eventsService.serverScheme=http
eventsService.host=eventsservice_host
eventsService.port=9080
eventsService.APIKey=1a234567-1234-1234-4567-ab123456

2. Modify the values of the installation parameters based on your own environment and requirements. In particular, ensure that the directory paths and passwords match your environment. 3. Run the installer with the following command:

./euem-64bit-linux.sh -q -varfile response.varfile

On Windows, use:

euem-64bit-windows.bat -q -varfile response.varfile

Install and Host a Custom Geo Server for Browser RUM

On this page:

Download and Install the Geo Server File

Set the Location of the Geo Server Create the IP Mapping File Customize File Locations For On-Premises EUM Servers Only: Use geo-ip-mappings.xml Precedence in Resolving Locations Debugging

Related pages:

Host a Geo Server The Browser Geo Dashboard View

By default, end-users' locations are resolved using public geographic databases. You can host an alternate geo server for your countries, regions, and cities instead of using the default geo server hosted by AppDynamics.

You may prefer to host your own geo server because:

you have intranet applications where the public IP address does not provide meaningful location information but the user's private IP does. you have a hybrid application where some users access the application from a private location and some access it from a public one. If a user doesn't come from a specific private IP range mapped by the custom geo server, the system can be set to default to the public geo server.

To host a custom geo server:

1. Download the Geo Server File
2. Set the Location of the Geo Server
3. Create the IP Mapping File

The 4.2 release of the EUM Geo Server adds significant improvements to the installation and functioning of the geo server, reflected in this document. As a result of these changes, the legacy xml mapping file format introduced in 3.6 is no longer supported. You can continue to use the 4.1 version of the geo server if you need to support this format.

Download and Install the Geo Server File

Download the GeoServer.zip file from AppDynamics at https://appdynamics.com/download.

Uncompress the zip to a GeoServer folder with the following structure:

GeoServer
    schema.xsd                  <-- schema for geo-ip-mappings.xml configuration
    geo
        WEB-INF
            classes
                logback.xml     <-- configure logging here
                ...
            web.xml             <-- other configuration here
            ...
        geo-ip-mappings.xml     <-- configure geo IP mapping here
        ...

To install the geo server, copy the geo folder to TOMCAT_HOME/webapps on your Tomcat server. Do not deploy the geo server in the same container as the Controller.

The geo server host needs around 2G of memory.

Set the Location of the Geo Server

Enter the URL, including the context root, of your hosted geo server in the Geo Server URL field in the Browser RUM configuration screen in the Controller UI. For example, if the geo server is deployed as described above, the context root is "/geo".

For more information, see Customize Your Browser RUM Deployment.

If you are using manual injection for your JavaScript agent, make sure that the copy of the script you use is one that you downloaded after this URL was set.

Create the IP Mapping File

The geo-ip-mappings.xml IP mapping file specifies the locations for which Browser RUM provides geographic data. It maps IP addresses to geographic locations.

Use the sample file in the geo subdirectory as a template. Any modifications at runtime are reloaded without a restart.

This file contains a <mapping> element for every location to be monitored. The file has the following format.
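The element names are defined by schema.xsd in the GeoServer folder, and the shipped sample file is the authoritative reference. A minimal subnet-based sketch looks something like this; the addresses and location values are placeholders:

<mappings>
    <mapping>
        <subnet from="192.168.1.0" mask="255.255.255.0"/>
        <country>United States</country>
        <region>California</region>
        <city>San Francisco</city>
    </mapping>
    <default>
        <country>United States</country>
        <region>California</region>
        <city>San Francisco</city>
    </default>
</mappings>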

You can also use IP-range-based mapping instead of subnet-based:
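A sketch of the same mapping expressed as an IP range rather than a subnet; the range element and attribute names shown here are illustrative, so confirm them against schema.xsd and the sample file:

<mapping>
    <ip-address-range from="192.168.1.1" to="192.168.1.254"/>
    <country>United States</country>
    <region>California</region>
    <city>San Francisco</city>
</mapping>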

This data is visible in browser snapshots and can be used to filter browser snapshots for specific locations. The <country>, <region>, and <city> elements are required. If the values of <country> and <region> do not correspond to an actual geographic location already defined in the geographic database, map support is not available for the location in the map panel, but Browser RUM metrics are displayed for the location in the grid view of the geographic distribution, end user response time panel, trend graphs, browser distribution panel, and in the Metric Browser. The <city> element can be a string that represents the static location of the end-user. You will also notice a <default> element: if there is an IP address that is not covered by your IP mapping file, this is the value that is used. To use a public geo server for non-covered IP addresses, see Using a Hybrid Custom-Public Geo Server Setup.

The valid names for country and region are those used in the map in the geo dashboard. You can hover over a region in the dashboard to see the exact name (including spelling and case) of the region. See The Browser Geo Dashboard View.

Using a Hybrid Custom-Public Geo Server Setup

If you want Browser RUM to evaluate any non-mapped IP address using the public geo server, remove the <default> element. In this case, locating any non-mapped IP address is done in the EUM cloud, not locally.

Customize File Locations

You can customize where certain files are stored in the GeoServer directory.

Change Log Location

By default, logs are written to TOMCAT_HOME/logs, but you can configure this using TOMCAT_HOME/webapps/geo/WEB-INF/classes/logback.xml. Open the file with a text editor and edit the LOG_HOME property.
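For example, to write geo server logs to a custom directory, the LOG_HOME property in logback.xml might be set like this; the path is a placeholder:

<property name="LOG_HOME" value="/var/log/geo-server"/>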

Change Mapping File Location

By default the geo server looks for geo-ip-mappings.xml in TOMCAT_HOME/webapps/geo/. To change the location, open TOMCAT_HOME/webapps/geo/WEB-INF/web.xml with a text editor and change the value of AD_GEO_CONFIG_FILE.

<servlet>
    <servlet-name>FrontControllerServlet</servlet-name>
    <servlet-class>com.appdynamics.eum.geo.web.FrontControllerServlet</servlet-class>
    <init-param>
        <param-name>AD_GEO_CONFIG_FILE</param-name>
        <param-value>{path-to-file}/geo-ip-mappings.xml</param-value>
    </init-param>
</servlet>

In previous versions of the geo server, the enclosing tag was a <context-param>. This has now been changed to an <init-param>.

For On-Premises EUM Servers Only: Use geo-ip-mappings.xml

If your installation uses an on-premises EUM Server and you have internal browsers from the same network as the Server that you want to identify, instead of setting up a separate custom geo server, you can simply modify the EUM Server's geo-ip-mappings.xml file as described above. The sample is in the bin directory of the Server. The Server automatically reads the file and uses it first to try to resolve the location, before using the MaxMind IP database.

Precedence in Resolving Locations

The custom geo server resolves locations based on the following precedence, from highest to lowest:

An IP address set by customizing the JavaScript agent. For more information, see Set User-Specified Location Programmatically.
An explicit query parameter: for example, http://mycompany.com/geo/resolve.js?ip=196.166.2.1
An IP provided using the AD-X-Forwarded-For header
An IP provided using the X-Forwarded-For header
The remote address of the HTTP request

Debugging

Because this debugging feature has a small performance impact, it should be turned off before putting the geo server into production.

To aid in debugging, the geo server ships with a debugging web interface enabled. You can reach this interface by going to http://<host>:<port>/geo/debug with a web browser.

The first tab, Configuration, displays the contents of the mapping file currently in use.

The second tab, History, shows the last few geo resolutions that have been performed.

By default, the last 20 resolutions are shown, but this can be configured in TOMCAT_HOME/webapps/geo/WEB-INF/web.xml.

<servlet>
    <servlet-name>FrontControllerServlet</servlet-name>
    <servlet-class>com.appdynamics.eum.geo.web.FrontControllerServlet</servlet-class>
    <init-param>
        <param-name>HISTORY_MAX_COUNT</param-name>
        <param-value>20</param-value>
    </init-param>
</servlet>

The third tab, Test, can be used to test the mapping file by trying to resolve an arbitrary IP address.

When you first navigate to this tab, it shows the geo resolution for your browser's IP address. The form in this tab can be used to try the resolution of another IP address.

Disabling debug

Open TOMCAT_HOME/webapps/geo/WEB-INF/web.xml and set DEBUG_ENABLED to false.

<servlet>
    <servlet-name>FrontControllerServlet</servlet-name>
    <servlet-class>com.appdynamics.eum.geo.web.FrontControllerServlet</servlet-class>
    <init-param>
        <param-name>DEBUG_ENABLED</param-name>
        <param-value>false</param-value>
    </init-param>
</servlet>

Troubleshoot EUM Server Installation

On this page:

End User Data Does Not Appear in the Controller License Not Installed EUM License Has Not Been Provisioned for an Account Exception When Updating Application Store

Copyright © AppDynamics 2012-2017 Page 139 "Too Many Open Files" Exception Controller Cannot Reach the Events Service

The following sections provide troubleshooting information for the EUM Server installation.

End User Data Does Not Appear in the Controller

If end user data does not appear in the Controller, follow these steps to troubleshoot the installation:

1. Check the Controller logs for errors in attempting to connect to the EUM Server. Also, see if the Controller UI allows you to enable EUM. If so, it's likely that the connection between the Controller and EUM Server is working.
2. Check the logs of the EUM Server, especially /logs/eum-processor.log. In the log, verify that the server started successfully and is receiving beacons from agents.
3. Make sure that the EUM JavaScript Agent is actually injected into the monitored page and that the agent can load the remote JavaScript.
4. Use browser debugging tools to check for JavaScript errors in the monitored page.

License Not Installed

If the installer indicates that it was not able to install the license, or after installation, if the EUM Server fails to start with a license exception, try installing the license manually.

With the Controller running and accessible to the EUM Server machine, install the license manually. Before starting, make sure the license.lic file is at an accessible location on the EUM Server machine. Then install the license as follows:

1. Verify that JAVA_HOME/bin is in the system PATH variable and points to a Java 1.7 instance.
2. In Windows, open an elevated command prompt (run as administrator).
3. From the command line, navigate to the eum-processor directory under your AppDynamics home.
4. Configure the Controller database password in the EUM Server (for a split installation, perform these steps on the Controller host):
a. Open EUM/eum-processor/bin/eum.properties for editing.
b. Replace the value of the onprem.controllerDbPassword property with the appropriate Controller database password, as shown in the example after these steps.
5. From the eum-processor directory, run the following script:
On Linux: ./bin/provision-license
On Windows: bin\provision-license.bat
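For reference, the edited entry in eum.properties might look like the following; the value is a placeholder for your Controller database root password:

onprem.controllerDbPassword=<controller-db-root-password>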

EUM License Has Not Been Provisioned for an Account

Follow the instructions below to provision new EUM licenses for accounts on a multi-tenant Controller UI.

1. Navigate to the Administration page of your on-premises Controller: http(s)://<controller-host>:<port>/controller/admin.jsp
2. Click Accounts.
3. Select the account name that you want to provision EUM for and click Edit.
4. Scroll down to the End User Monitoring (EUM) panel.
5. From the Browser Real User Monitoring section:
a. Copy the EUM license key from your license file into the EUM License Key field.
b. Select a license type (EUM Lite or EUM Pro) from the License Type dropdown.
c. Enter your allotted Browser RUM units into the Browser RUM Units Licensed field.
d. Set overages from the Allow Overages dropdown.
6. Complete the steps above for the Mobile Real User Monitoring section.
7. Click Save.

Exception When Updating Application Store

You may need to tune the database thread pool. The EUM Server ships with a blank c3p0 xml configuration file to help manage this.

For information on using c3p0, see the c3p0 documentation. You should make your changes to /bin/c3p0.xml.

"Too Many Open Files" Exception

Exception messages such as the following indicate insufficient open file descriptor limits on the EUM Server machine:

java.io.IOException: Too many open files

See Install the EUM Server for operating system requirements, including recommended settings for nofile and nproc limits for the EUM Server operating system.

Controller Cannot Reach the Events Service

In the Administration Console, make sure that the Controller setting named eum.es.host is set to the correct connection settings for the Events Service instance, and that the Events Service is properly installed and running at that location. If the Events Service is a cluster with a load balancer in front of it, this should be the VIP of the Events Service as exposed at the load balancer.
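For example, for an Events Service cluster fronted by a load balancer, the setting might look something like the following sketch; the hostname is a placeholder, and the exact expected format (host:port versus a full URL) should be confirmed for your Controller version:

eum.es.host=events-service-lb.example.com:9080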

For more information, see Connect to the Events Service.

Secure the EUM Server

On this page:

Set Up a Custom Keystore for Production Change the Certificate Keystore Password Change the Credential Keystore Password for the EUM Database Change the EUM Database Password

If you use HTTPS connections in a production (split host) EUM Server installation, install a custom security certificate that uses RSA for the EUM server to use. This page also describes how to change the password for the credential keystore and how to obfuscate a password for the security certificate keystore.

Set Up a Custom Keystore for Production

In demo mode, the EUM Server includes a default self-signed certificate named ssugg.keystore. This certificate is intended for demonstration and light testing only. Do not use self-signed certificates for production systems since they are less secure than Certificate Authority (CA) signed certificates. EUM requires that certificates use RSA as the key algorithm whether they are self-signed or CA-signed.

For Mobile Real User Monitoring, if you use the default or another self-signed certificate on your EUM Server for testing, you may receive the following error: "The certificate for this server is invalid". Ensure that your self-signed certificate is trusted by the simulator or device you use for testing. In most real-world scenarios, a CA signed certificate should be used since a self-signed certificate needs to be explicitly trusted by every device that reports to your EUM processor.

To secure the EUM server with a custom certificate and keystore, generate a new JKS keystore and configure the EUM Server to use it.

These instructions describe how to create a JKS keystore for the EUM Server with a new key-pair or an existing key-pair. You can also configure the EUM server to use an existing JKS keystore.

The commands demonstrate the steps in Linux, but are similar to those used for Windows. Adjust the paths as appropriate for your operating system.

Overview of the Steps

The procedure is made up of three parts:

1. Create a new certificate and JKS keystore or import an existing certificate into a JKS keystore.
2. Configure the EUM Server to use the keystore.
3. Restart and test.

Step 1a: Create a Certificate and JKS Keystore

You can create a new public-private key pair and JKS keystore with which to secure the EUM server. Alternatively, you can import an existing certificate into a keystore.

1. At a command prompt, navigate to the eum-processor directory:

cd /EUM/eum-processor

2. Create a new keystore with a new unique key pair that uses RSA encryption:

../jre/bin/keytool -genkey -keyalg RSA -validity <number-of-days> -alias 'eum-processor' -keystore bin/mycustom.keystore

This creates a new public-private key pair with an alias of "eum-processor". You can use any value you like for the alias.

The "first and last name" required during the installation process becomes the common name (CN) of the certificate. Use the name of the server.

3. Configure the keystore.
4. Specify a password for the keystore. You need to configure this password in the EUM configuration file later.
5. Generate a certificate signing request (CSR):

../jre/bin/keytool -certreq -keystore bin/mycustom.keystore -file /tmp/eum.csr -alias 'eum-processor'

This generates a certificate signing request based on the contents of the alias, in the example "eum-processor". You should send the output file (/tmp/eum.csr, in the example) to a Certificate Authority for signing. After you receive the signed certificate, proceed as follows.
6. Install the certificate for the CA used to sign the .csr file:

../jre/bin/keytool -import -trustcacerts -alias myorg-rootca -keystore bin/mycustom.keystore -file /path/to/CA-cert.txt

This imports your CA's root certificate into the keystore and stores it in an alias called "myorg-rootca".
7. Install the signed server certificate as follows:

../jre/bin/keytool -import -keystore bin/mycustom.keystore -file /path/to/signed-cert.txt -alias 'eum-processor'

This imports your signed certificate over the top of the self-signed certificate in the existing alias, in the example, "eum-processor".

Step 1b: Import an Existing Certificate into a JKS Keystore

If you have an existing certificate that uses RSA, import it into a JKS keystore to use it for EUM.

1. Stop the EUM process.

Run the following command from the eum-processor directory:

bin/eum.sh stop

2. If there is an existing custom JKS keystore, back it up:

mv <keystore-name>.jks <keystore-name>.jks.old

3. Import the private and public key into a PKCS12 keystore:

openssl pkcs12 -inkey <private-key-file> -in <certificate-file> -export -out keystore.p12

4. Convert the PKCS12 keystore to JKS format:

keytool -importkeystore -srckeystore keystore.p12 -srcstoretype pkcs12 -destkeystore <keystore-name>.jks -deststoretype JKS

This creates a JKS keystore with the name specified in the -destkeystore parameter.
5. Specify a password for the keystore. Use this password when you configure EUM to use the new keystore.

Step 2: Configure the EUM Server to Use the New Keystore

1. Place the new keystore file under the bin directory of the eum-processor directory.
2. Open the eum.properties file in the bin directory for editing.
3. Add the keystore filename as the following property:

processorServer.keyStoreFileName=mycustom.keystore

4. Configure the password for the keystore. You can add the password to the file either in plain text or in obfuscated form: For a plain text password, add the password as the value for this property:

processorServer.keyStorePassword=mypassword

For an obfuscated password:
a. Get the obfuscated password by running this command in the eum-processor directory in a new command terminal:

bin/eum-credential-key.sh obfuscate -plaintext <keystore-password>

b. Copy the output of the command to your clipboard.
c. In eum.properties, paste the obfuscated password as the value of the keyStorePassword property:


processorServer.keyStorePassword=<obfuscated-password>

d. Add the useObfuscatedKeyStorePassword property with the value set to true, as shown:

processorServer.useObfuscatedKeyStorePassword=true

5. Save and close the file.
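Taken together, the keystore-related entries in eum.properties look something like the following sketch; the filename and the obfuscated value are placeholders:

processorServer.keyStoreFileName=mycustom.keystore
processorServer.keyStorePassword=<obfuscated-password>
processorServer.useObfuscatedKeyStorePassword=true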

Now test the configuration, as described next.

Step 3: Restart and Test

1. Restart the EUM Server. From the eum-processor directory:

bin/eum.sh stop
bin/eum.sh start

2. Verify the new SSL server cert works by opening the following page in a browser:

https://<eum-server-host>:7002/eumcollector/get-version

If you get a successful response, the configuration succeeded.

Change the Certificate Keystore Password

The previous steps describe how to create a new keystore which is likely to have a new password. To change the keystore password without creating a new keystore, follow these steps:

1. From the eum-processor directory in the EUM Server home, run the keytool command for creating a new password:

../jre/bin/keytool -storepasswd -keystore bin/ssugg.keystore

The sample command changes the password for the default demo keystore, ssugg.keystore. In your command, use the name of your own keystore as the -keystore value.
2. Enter the existing password and new password when prompted by keytool.
3. Get the obfuscated key by running this command in the eum-processor directory:

bin/eum-credential-key.sh obfuscate -plaintext <keystore-password>

4. Copy the output of the previous command to your clipboard.
5. In bin/eum.properties, paste the obfuscated password as the value of the keyStorePassword property, and add the useObfuscatedKeyStorePassword property with the value set to true, as shown:


processorServer.keyStorePassword=<obfuscated-password>

6. If you did not previously use an obfuscated password, add the following property:

processorServer.useObfuscatedKeyStorePassword=true

7. Save and close the file.
8. Restart the EUM Server.

Change the Credential Keystore Password for the EUM Database

When installing the EUM Server, you needed to specify a password to use to secure the credential keystore for the EUM Server. After installation, you can change the password for the credential keystore, as described here. You may need to do this, for example, to comply with your organization's password rotation policy.

The following instructions describe how to change the key. Note that completing these procedures requires a restart of the EUM Server.

To change the existing EUM server credential key:

1. Generate a credential store with the new key using the following command: On Linux:

bin/eum-credential-key.sh generate_ks -storepass <new-credential-key>

On Windows:

bin\eum-credential-key.bat generate_ks -storepass <new-credential-key>

This creates and initializes a new credential file, bin/credential.scs.
2. Re-encrypt the database password using the new credential store.
On Linux:

bin/eum-credential-key.sh encrypt -storepass <new-credential-key> -plaintext <database-password>

On Windows:

bin\eum-credential-key.bat encrypt -storepass <new-credential-key> -plaintext <database-password>

The command prints out the encrypted form of the database password you entered.
3. Copy the output from the previous command to your clipboard.

4. Open bin/eum.properties for editing, and replace the value of the onprem.dbPassword setting with the new encrypted password you copied to your clipboard.
5. Obfuscate the new credential key as follows:
On Linux:

bin/eum-credential-key.sh obfuscate -plaintext <new-credential-key>

On Windows:

bin\eum-credential-key.bat obfuscate -plaintext <new-credential-key>

6. Copy the output of the previous command to your clipboard and, in eum.properties, replace the value of onprem.credentialKey with the value from your clipboard.
7. Save and close the properties file.
8. Restart the EUM server.
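After these steps, the affected entries in eum.properties look something like the following sketch; both values are placeholders for the output of the encrypt and obfuscate commands:

onprem.dbPassword=<encrypted-database-password>
onprem.credentialKey=<obfuscated-credential-key>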

Change the EUM Database Password

At EUM Server installation time, you set a password for the EUM database. You can change it later as follows:

1. Encrypt the new database password using the credential key that you entered during installation:
On Linux:

bin/eum-credential-key.sh encrypt -storepass <credential-key> -plaintext <new-database-password>

On Windows:

bin\eum-credential-key.bat encrypt -storepass <credential-key> -plaintext <new-database-password>

The command prints out the encrypted form of the database password you entered.
2. Copy the output from the previous command to your clipboard.
3. Open bin/eum.properties for editing and replace the value of the onprem.dbPassword setting with the new encrypted password you copied to your clipboard.
4. Save and close the properties file.
5. Restart the EUM server.

Configure the EUM Server

On this page:

Configuring the On-Premises Data Store Expiration Updating the EUM Server's Geo Server Switch to Neustar Database Turn On Access Logs EUM Server Configuration File

This page describes administration and advanced configuration options for the EUM Server.

Configuring the On-Premises Data Store Expiration

As part of the Analytics functionality used by EUM, the Server stores some data, like crash reports, in a local blob store. To make sure the Controller doesn't request data from the blob store that is no longer there, if you change the default setting of 30 days, you must make sure the data expiration times in the Server itself and in the Controller are compatible. The value in the EUM Server should be at least one day greater than the value in the Controller.

Setting Data Store Expiration in the EUM Server

1. Open $APPDYNAMICS_HOME/eum-processor/bin/eum.properties with a text editor.
2. Open $APPDYNAMICS_HOME/eum-processor/bin/eum.sample.properties with a text editor.
3. Copy the onprem.crashReportExpirationDays property from the sample file into eum.properties and set it to whatever value you wish. The unit is days.
4. Restart the Server.

Setting Data Store Expiration in the Controller

1. Log in to the Controller administration console using the root account password. See Access the Administration Console.

http://<controller-host>:<port>/controller/admin.jsp

Use the root account password to access the Admin console when the Controller is installed in single- or multi-tenant mode.
2. Click Controller Settings.
3. Change the data expiration property events.retention.period to a value that is at least one day less than the value you set in eum.properties. The unit for this property is hours.
4. Save your change.
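For example, to keep crash-report data for roughly 30 days, you could use values like the following; these numbers are illustrative, and events.retention.period is a Controller Setting expressed in hours while onprem.crashReportExpirationDays is an eum.properties setting expressed in days:

events.retention.period=720
onprem.crashReportExpirationDays=31

Here the EUM Server retains data for 31 days, one day longer than the 720 hours (30 days) configured in the Controller.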

Updating the EUM Server's Geo Server

The on-premises Server ships with MaxMind's IP geo lite database for managing geo-location of IP addresses. The first Tuesday of every month MaxMind updates the database. To keep your version of the database current, you need to update your copy of the database manually.

1. Go to the MaxMind web site, http://dev.maxmind.com/geoip/legacy/geolite/.
2. Download a binary copy of GeoLite City.
3. Unzip the downloaded file to get GeoLiteCity.dat.
4. Rename the file to GeoIPCity.dat.
5. Stop the Processor.
6. Copy the new GeoIPCity.dat to /bin, replacing the old copy.
7. Restart the Server.

Switch to Neustar Database

By default in the EUM Server, the resolution of geographic regions based on the IP addresses of the requests is based on the Maxmind IP data file. Although this works well in many cases, in some situations the more highly granular database offered by Neustar is preferable. Please note, however, that the Neustar db requires more memory.

To switch your geo-resolver from Maxmind to Neustar:

1. Download the Neustar data file (neustar.dat) from https://www.appdynamics.com/download.
2. Place neustar.dat in the bin subdirectory of the eum-processor directory.
3. Open eum.properties with a text editor and add the following properties:


beaconReader.geoDataType=neustar
beaconReader.geoDataFile=bin/neustar.dat

4. Increase the JVM max memory (-Xmx) for the Server by at least 800M.
5. Restart the EUM Server.

Turn On Access Logs

By default, server access logging for the EUM Server's underlying application server is turned off. To turn it on, open /eum-processor/conf/local-eum-processor.yml with a text editor and find the following section under the server entry:

requestLog:
  appenders: []

Add the following information:

requestLog:
  timeZone: UTC
  appenders:
    - type: file
      archive: true
      currentLogFilename: ../logs/access.log
      archivedLogFilenamePattern: ../logs/access-%d.log.gz

Save the file and restart the EUM Server.

EUM Server Configuration File

You can configure the EUM Server by setting properties in the file /bin/eum.properties. It is recommended that you copy the sample file /bin/eum.sample.properties to /bin/eum.properties, modify the settings to fit your needs, and then restart the EUM Server so that the new settings are applied.

The table below lists and describes the supported EUM properties, lists defaults, and specifies whether the property is required. The values for the database properties must conform with the MySQL syntax rules given in Schema Object Names.

onprem.dbHost (default: dbHost; required): The name of the database host.

onprem.dbPort (default: 3388; required): The port to the database host.

onprem.dbSchema (default: eum_db; required): The name of the EUM database.

onprem.dbUser (default: eum_user; required): The user name for the EUM database.

onprem.dbPassword (no default; required): The user password to the EUM database.

onprem.controllerDbScheme (default: controller; required): The name of the Controller database. The Controller database has the accounts table that stores the EUM licenses.

onprem.controllerDbUser (default: controller; required): The user name for the Controller database. This user has full permission to the Controller database.

onprem.controllerDbPassword (no default; required): The user password for the Controller database.

onprem.fileStoreRoot (default: ../store; optional): The path to the directory storing EUM data such as snapshots.

onprem.crashReportExpirationDays (default: 365; optional): The number of days that crash reports are retained.

onprem.resourceSnapshotExpirationDays (default: 15; optional): The number of days that resource snapshots are retained.

onprem.resourceSnapshotAllowance (default: 20 * 1024 * 1024 * 1024; optional): The maximum disk space allotted for storing resource snapshots. The default maximum disk space is 20 GB, or 21474836480 bytes. You must specify the value as an integer or a mathematical expression.

processorServer.httpPort (default: 7001; optional): The HTTP port of the EUM Processor. The EUM Processor runs in one process containing the collector, aggregator, crash-processor, and monitor services.

processorServer.httpsPort (default: 7002; optional): The HTTPS port of the EUM Processor.

processorServer.httpsProduction (default: true; optional): The flag for enabling (true) or disabling (false) HTTPS to the EUM Processor.

processorServer.keyStorePassword (default: 1wnl1u; optional): The password to the Key Store for the EUM Processor.

processorServer.keyStoreFileName (default: bin/ssugg.keystore; optional): The path to the Key Store file for the EUM Processor.

processorServer.collectorHttpPort (default: 9001; optional): The HTTP port of the EUM Collector. By default, the EUM Collector shares the same port as the EUM Processor, but you can configure the port to be different. The EUM Collector receives the metrics sent from the JavaScript agent.

processorServer.collectorHttpsPort (default: 9002; optional): The HTTPS port of the EUM Collector.

analytics.enabled (default: true; required): The flag for enabling or disabling the Analytics Server.

analytics.serverScheme (default: http; optional): The network protocol for connecting to the Analytics Server. Only required when analytics.enabled=true.

analytics.serverHost (default: events.service.hostname; optional): The hostname of the Analytics Server. Only required when analytics.enabled=true.

analytics.port (default: 9080; optional): The port to the Analytics Server. Only required when analytics.enabled=true.

analytics.accountAccessKey (default: access-key; optional): The access key for connecting to the Analytics Server. Only required when analytics.enabled=true.

analytics.eventTypeLifeSpan.0.eventType (default: BrowserRecord; optional): The type of event to be saved. The supported values are BrowserRecord and MobileSnapshot. If this property is set, you must also set analytics.eventTypeLifeSpan.0.lifeSpan.

analytics.eventTypeLifeSpan.0.lifeSpan (default: 15; optional): The number of days to retain the event records specified by analytics.eventTypeLifeSpan.0.eventType. If this property is set, you must also set analytics.eventTypeLifeSpan.0.eventType.

analytics.eventTypeLifeSpan.1.eventType (default: MobileSnapshot; optional): The type of event to be saved. The supported values are BrowserRecord and MobileSnapshot. If this property is set, you must also set analytics.eventTypeLifeSpan.1.lifeSpan.

analytics.eventTypeLifeSpan.1.lifeSpan (default: 15; optional): The number of days to retain the event records specified by analytics.eventTypeLifeSpan.1.eventType.

onprem.mobileAppBuildTimeSeriesRequestCountRollupDays (default: 7; optional): The EUM Collector searches for the dSYM file in the beacon traffic for the configured number of days. If the dSYM file is not present during the configured time frame, a warning message is displayed in the Controller UI.

onprem.maxNumberOfMobileBuildsWithoutDsym (default: 10; optional): The maximum number of visible missing dSYM files in the Controller UI.

collection.accessControlAllowOrigins.0 (default: *; optional): You can use this property to limit cross-origin resource sharing (CORS). By default, the EUM Collector responds with the following header: Access-Control-Allow-Origin: *
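For reference, a minimal eum.properties covering the required settings might look like the following sketch; the hostnames, passwords, and access key are placeholders to be replaced with values from your environment:

onprem.dbHost=eum-db.example.com
onprem.dbPort=3388
onprem.dbSchema=eum_db
onprem.dbUser=eum_user
onprem.dbPassword=<encrypted-eum-db-password>
onprem.controllerDbScheme=controller
onprem.controllerDbUser=controller
onprem.controllerDbPassword=<encrypted-controller-db-password>
analytics.enabled=true
analytics.serverScheme=http
analytics.serverHost=events-service.example.com
analytics.port=9080
analytics.accountAccessKey=<events-service-access-key>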

Upgrade the EUM Server

On this page:

Order of On-Premises Upgrades Verify EUM Server Configuration after a Controller Upgrade Using a 4.2 EUM Server with an Earlier Version of the Controller

This page describes how to upgrade an EUM Server to the latest version. Normally, this would be done alongside an upgrade to the other platform components, the Controller and Events Service.

Order of On-Premises Upgrades

Perform the upgrade in the following order if you did not use the Platform Administration Application to install the Events Service:

1. Upgrade the Events Service
2. Upgrade the EUM Server
3. Upgrade the Controller

Perform the upgrade in the following order if you used the Platform Administration Application to install the Events Service:

1. Shut down the EUM Server.
2. Upgrade the Controller.
3. Upgrade the Events Service with the application.
4. Upgrade the EUM Server.

Note that upgrading platforms that use the Platform Administration Application results in downtime of the EUM Server.

Upgrade the EUM Server

To upgrade the EUM Server, follow these steps:

1. Stop the EUM Server.
2. On the Server host, back up the old version by copying the directory to a safe location.
3. Run the new installer on the Controller host machine and select option 2.

4. Run the new installer on the EUM host machine and select option 3.
5. Follow the steps to complete the upgrade.

Verify EUM Server Configuration after a Controller Upgrade

In version 4.2 of the AppDynamics Application Intelligence Platform, configuration settings previously kept in the domain.xml file of the Controller application server have been moved to Controller Settings, which are accessible in the Controller Administration Console.

The Controller upgrade process takes care of the settings migration for you, but you may wish to verify the upgrade configuration as described here.

To verify EUM settings in the Controller configuration:

1. From a terminal, navigate to the following directory on the Controller machine: /appserver/glassfish/domains/domain1/config
2. Open the backup of the domain.xml file generated during the upgrade process. The backup file is located in the same directory and is named with the following pattern, where yyyyMMdd_HHmmss is the time at which the backup was created:

domain.xml_yyyyMMdd_HHmmss.bak

3. Compare the JVM options indicated below from the backup domain file to the equivalent Controller Settings in the Administration Console:

JVM Options from domain.xml.bak Equivalent Global Controller Setting

appdynamics.controller.eum.cloud.hostName eum.cloud.host

appdynamics.controller.eum.beacon.hostName eum.beacon.host

appdynamics.controller.eum.beacon.https.hostName eum.beacon.https.host

appdynamics.controller.eum.analytics.service.hostName eum.es.host

In the file, the JVM options appear as -D system property arguments within jvm-options tags. For example:

-Dappdynamics.controller.eum.cloud.hostName=http://controller.example.com:7001

4. If you notice any discrepancies, ensure that the Controller Setting has the correct value.

Using a 4.2 EUM Server with an Earlier Version of the Controller

As described in Verify EUM Server Configuration after a Controller Upgrade, EUM-related settings previously configured in the domain configuration file for the Controller are now in the Administration Console. If you are using a 4.2 version of the EUM Server with an earlier version of the Controller, after installing the 4.2 EUM Server, you may need to configure settings in the Controller domain configuration (domain.xml) directly.

Use the modifyJvmOptions tool on the Controller machine to set up the EUM Server hostname and port. For example:

modifyJvmOptions add -Dappdynamics.controller.eum.cloud.hostName=myeumserverhost.example.com:myeumserverport

If you upgrade the Controller later, verify the upgrade by logging in to the Administration Console and verifying that the eum.cloud.host Controller Setting has the value of the EUM Server hostname and port.

Administer the Controller

This section contains topics on installing and administering the Controller. Deployment topics provide information on how to install the Controller in the data center.

Certain administration tasks require access to the command line on the Controller machine. AppDynamics provides command line tools for common operations, such as starting and stopping the Controller and its individual services. In addition, some tasks involve using the administration interface for the underlying Glassfish application server, either the web interface or the command line tool.

controller.bat/sh: Run as admin. Use to start and stop the Controller or its individual services, add Java startup options, and so on. To see all options, run controller.sh from the command line with no parameters.
admin.jsp: Access basic operational settings for the AppDynamics installation, including data retention settings, tenancy mode, and more.
Glassfish administration tool: Use the Glassfish admin tool, asadmin, to configure settings in the underlying application server.

The topics in this section describe how to use the tools, along with many other administration tasks in the deployment.

Administering Users

The Users and Roles topic describes how to add locally authenticated user accounts in the AppDynamics Controller, and introduces the topic of roles and groups.

The following topics provide additional information on advanced user administration features, such as integrating the Controller with external authentication providers, along with information on the administrative accounts in the system.

Administrative Users Configure Authentication Using LDAP Configure Authentication Using SAML

Modifying User Session Timeout

The Controller logs users out of Controller UI sessions after 60 minutes of inactivity by default. For an on-premise Controller, it's possible to modify the default timeout value, as follows:

1. As the AppDynamics root user, log in to the Administration Console.
2. Find and set the values for these properties:
http.session.inactive.timeout: The amount of time without a client request to the Controller after which the user session times out and the user will need to log in again to continue. The default is 3600 seconds (60 minutes).
ui.inactivity.timeout: The amount of time without user activity in the Controller UI after which the user session times out and the user will need to log in again to continue. The default is -1 (disabled).

Setting a System Notification Message

You can have a message appear as a dialog box after a successful login. The user will need to close the dialog box to continue. To set a login message, enter the text as the value of the system.user.notification.message Controller setting in the Administration Console. Clearing the text for the setting suppresses the appearance of the dialog.

Administrative Users

On this page:

About the root User and Account Owners Change the Controller root User Password Change the Glassfish admin User Password Reset the Database root User Password


This topic describes the root user and other types of administrative users in AppDynamics.

About the root User and Account Owners

The root user is a built-in Controller user with global administrator privileges in the Controller environment. Only the root user can access the System Administration Console, the web page where you can create and manage accounts in multi-tenant Controllers and configure global Controller settings in both single- or multi-tenant Controllers.

The root user can be thought of as a superuser for the Controller. Unlike other types of users, you cannot remove the root user account or create other superuser accounts in the Controller. The password for the root user is first set at installation time, but you can change it after installation by following the steps below.

While the root user has global administrative privileges, account administrators act as administrators only within individual accounts in a multi-tenant Controller. It's typically the role of the root user to create accounts and an initial administrator for the account, and the role of each account administrator to create additional users within the account.

For information, see Roles and Permissions and Users and Groups.

Change the Controller root User Password

You can change the root user password from the AppDynamics administration console page.

To change the root user password

1. In a browser, log in to the administration console as described in Access the Administration Console.
2. Click the Settings icon from the right side of the menu bar and then My Settings.
3. Click the Edit link and then the Change Password link.
4. Enter the new password for the root user in the New Password and Repeat New Password fields.
5. Click Save.

Logging in to the administration console as described here requires you to have the root user password. If you do not have the root user password and need to reset it, see Reset Root User Password.

Change the Glassfish admin User Password

The Controller uses the built-in administrator account in the underlying Glassfish application server. To change the password for this user, you need to change it in two places, in Glassfish and in the password file used by the Controller, as follows.

To change the Glassfish admin user password

1. In a browser, log into the Glassfish admin console as the administrator as described in Access the Administration Console.
2. Click Domain from the navigation tree on the left.
3. Open the Administrator Password tab.
4. Type and retype the new password in the password fields, and click Save.
5. Change the AS_ADMIN_PASSWORD value in the .passwordfile file located in the Controller home directory to match the new password.

The password change takes immediate effect.

Reset the Database root User Password

If you lose the password for the Controller database, you must stop the App Server and Controller database before you can reset the password.

To reset the database root user password

1. Open the command line on the machine where the Controller runs.
2. Stop the App Server and the database with the following command: controller.sh/bat stop.
3. Start the database in insecure mode with the following command: controller.sh/bat start-db insecure.


The insecure option starts the database without password requirements. Use this option only to reset the password for the database. The option is similar to starting MySQL with the --skip-grant-tables option.

4. Log in to the database with the following command: controller.sh/bat login-db insecure.
5. Use MySQL to run the following commands:
a. Specify the Controller database with the following command: use mysql;
b. Reload the MySQL grant tables with the following command: FLUSH PRIVILEGES;
c. Configure the new password for the root user with the following command: update mysql.user set password=password('<new-password>') where user like 'root%';
d. Reload the MySQL grant tables with the following command: FLUSH PRIVILEGES;
e. Exit MySQL with the following command: quit
6. Stop the database with the following command: controller.sh/bat stop-db.
7. Start the App Server and Glassfish with the following command: controller.sh/bat start.

Reset Root User Password

Related pages:

Administrative Users Administer the Controller

If you have lost the AppDynamics root user password for your installation and need to reset it, follow these steps:

1. From the command line, change to the Controller's bin directory. For example, on Linux:

cd <controller_home>/bin

2. Use the following script to log in to the Controller database:
For Windows: controller.bat login-db
For Linux: sh controller.sh login-db
You should see a MySQL prompt.
3. At the MySQL prompt, enter the following SQL command to get root user details:

select * from user where name='root' \G;

4. Use the following SQL command to change the password:

update user set encrypted_password = sha1('<new-password>') where name = 'root';

The hash for the password will be upgraded to PBKDF2 when you log in.

For information on setting the database root user password, see Controller Data and Backups.

Configure Authentication Using LDAP

On this page:

Preparing the LDAP Directory for AppDynamics Integration Using Paged Results for Large Result Sets LDAP Authentication with a SaaS AppDynamics Controller What Happens if the LDAP Server Becomes Unavailable What Happens if a User is Not Found in the LDAP Directory Before Starting

Configuring LDAP Authentication Configuring the LDAP Cache Synchronization Frequency

This topic describes how to integrate AppDynamics Controller with LDAP directory servers.

LDAPv3 Support

You can delegate Controller UI authentication and authorization to external directory servers that comply with LDAP (Lightweight Directory Access Protocol) version 3.

While the Controller should be able to work with any LDAPv3-compliant server, it has been verified against these LDAP products:

Microsoft Active Directory for Windows Server 2008 SP2+
OpenLDAP 2.4+

To configure LDAP authentication in the Controller, you need to configure connection settings to the LDAP server and the queries that return user or group data. By mapping LDAP groups to roles, you can provision permissions in the AppDynamics Controller based on LDAP groups.

Preparing the LDAP Directory for AppDynamics Integration

To use an LDAP authentication provider, your AppDynamics Controller needs to be able to connect to the external LDAP server. A good practice is to create a user account in LDAP specifically for the Controller to use to authenticate itself to the server and run the queries. The Controller user only needs to have search privileges in LDAP.

While you can map existing LDAP group definitions to roles in AppDynamics, your existing groups may not correspond directly to roles in AppDynamics. The easiest way to map LDAP groups to Controller roles is to create a group in LDAP for each role you want mapped in AppDynamics. This gives you a manageable, 1-to-1 correspondence between your LDAP groups and AppDynamics roles.

For example, a possible LDAP group scheme for mapping in AppDynamics would be:

AppDynamics-AppA-ReadOnly
AppDynamics-AppA-Admins
AppDynamics-AppA-DashboardViewers
AppDynamics-AppB-ReadOnly
AppDynamics-AppB-Admins
AppDynamics-AppB-DashboardViewers

The sample group names imply having custom roles in AppDynamics targeted to specific applications, AppA and AppB.

Naming the groups with a common prefix, as the "AppDynamics-" prefix in our sample, allows you to use a relatively simple LDAP group filter. A group filter for the sample groups could be:

(&(objectClass=group)(cn=AppDynamics-*))

Using Paged Results for Large Result Sets

LDAP servers are sometimes configured to limit the number of entries they can return in a query response. If the results of your user or group query exceed that limit, AppDynamics reports a max_results_exceeded error.

To avoid this error, first try to refine your query filter to produce a smaller result set. Of course, the results still need to include the users who will need to access the AppDynamics UI.

If your LDAP server supports it, you can also enable paged results in the Controller LDAP configuration. With paged results, the LDAP server divides the result set into separately transmitted blocks.

The paged results feature applies to the behind-the-scenes interaction between the AppDynamics Controller and the backend LDAP server. It does not affect the UI view of the data.

LDAP Authentication with a SaaS AppDynamics Controller

Depending on your organization's security policies, it may not be possible to use LDAP authentication with the SaaS AppDynamics Controller, since doing so requires opening your firewall to permit Controller access to your corporate LDAP server.

However, if you do want to enable LDAP authentication with SaaS AppDynamics Controller, you will need to permit access through the firewall for the IP range of 69.27.44.0/24, the IP address range assigned to AppDynamics SaaS Controllers. The firewall rule should permit incoming LDAP requests from the Controller at the LDAP port you configure.

What Happens if the LDAP Server Becomes Unavailable

If you have configured the controller to use LDAP for authentication and the LDAP server becomes unavailable for any reason, AppDynamics falls back to local user authentication. Given this possibility, you should provision local user accounts in AppDynamics for users who need to access AppDynamics in the event that the LDAP server becomes unavailable.

What Happens if a User is Not Found in the LDAP Directory

In this case the authentication failure is logged as a warning. The user, whether it is a regular controller user or a REST client user, may still be authenticated through local authentication.

Before Starting

To perform LDAP configuration you must have:

An LDAP server. There is a one-to-one correspondence between an AppDynamics account and an LDAP server.
An account on an AppDynamics SaaS or on-premise Controller.
Account administrator privileges on the AppDynamics Controller, as described in Administrative Users.
Network connectivity between your LDAP server and the Controller. If using a SaaS Controller, the LDAP server may not be accessible to the Controller without enabling access through your network firewall. See LDAP Authentication with a SaaS AppDynamics Controller.

Configuring LDAP Authentication

At a high level, the steps for setting up LDAP authentication include:

Configure the connection to the LDAP server.
Configure and test the LDAP query that returns users to be provisioned in the AppDynamics Controller.
Configure the LDAP query that returns the LDAP groups to be mapped to AppDynamics roles.
Map the users or groups to roles in AppDynamics.

Configure the Connection to the LDAP Server

As an administrator or account owner in the Controller UI, you can configure LDAP authentication from the Authentication Provider tab under Settings > Administration.

If the user or group query that you need to use will return more entries than permitted by the LDAP server and the server supports paged results, configure paged results as follows:

Enable Paging: Check this option to have the Controller request paged results from the server when submitting user or group queries. Page Size: Enter the number of entries per round-trip from the AppDynamics Controller to the LDAP server. The default is 500.

The page size should be the total number of entries to be returned divided by the number of round trips between the LDAP server and the Controller that are tolerable. For example, if you expect to receive 1200 results in a query and you can tolerate a maximum of two round trips, set the page size to 600 (1200 /2). See Using paged results for large result sets for more information.

Configure the LDAP connection settings:

Host: Required address of the LDAP server.
Port: Port on which the LDAP server listens. Default is 636 for an SSL connection and 389 if not using SSL. Required.
Use SSL: Enabled by default to use a secure connection to the LDAP server. Clear if not using SSL.
Enable Referrals: Enabled by default to support LDAP referrals. A referral is when an LDAP server forwards an LDAP client request to another LDAP server. Each referral event is referred to as a hop.
Maximum Referral Hops: The maximum number of referrals that AppDynamics will follow in a sequence of referrals. Default is 5.
Bind DN: Distinguished Name of the user on the LDAP server on whose behalf the AppDynamics application searches. Required.
Password: Password of the user on the LDAP server. Required.

Configure Users

In the LDAP configuration page, configure information to find LDAP users:

Base DN: Location in the LDAP tree to begin recursively searching for users. Required.
Filter: Optional LDAP search string that filters the items matched from the base DN. See RFC 2254 for information about LDAP search filters.
Login Attribute: The LDAP field that corresponds to the username users will enter when logging in to the AppDynamics UI. The default is "uid". For Active Directory, this would typically be "sAMAccountName".
Display Name Attribute: The LDAP field to be used as the user's display name.
Group Membership Attribute: Optional user group membership field. Recommended for faster retrieval.
Email Attribute: Optional user email address.

The Test Query button checks the connection. If successful, a screen displays the first few users returned by the query. (The test does not return the entire result set if the result set is large.)

Configure Groups

Optionally, you can map LDAP groups to user roles in the AppDynamics Controller. To do this, you need to set up the LDAP query that returns the LDAP groups to map, as follows.

Base DN: Location in the LDAP tree to begin recursively searching for groups. Required.
Enable Nested Groups: Option to include nested LDAP groups to a depth of 10.
Filter: Optional LDAP search string that filters the items matched from the base DN. See RFC 2254 for information about LDAP search filters.
Name Attribute: The LDAP field that contains the name of the group. Default is "cn". Required.
Description Attribute: The LDAP field that contains a description of the group. Optional.
User Membership Attribute: Identifies members of the groups. Optional.
Referenced User Attribute: Optional child attribute of the User Membership Attribute. Disabled if the parent is empty. Identifies the property of the user that the user membership attribute contains.

The Test Query button checks the connection. If successful, the first few groups returned by the query are shown.

You can now assign permissions in the AppDynamics Controller to users or groups.

Assign AppDynamics Permissions to an LDAP User

1. In the Security Configuration window, click the Users tab. If LDAP is enabled and correctly configured, the AppDynamics Controller fetches the user names from the LDAP server.
2. Select the name of the user to whom you want to assign permissions.
3. In the Roles panel, check the roles that you want to assign to this user. You can assign multiple roles to a user.

4. Click Save.

Assign AppDynamics Permissions to an LDAP Group

LDAP Group configuration is optional.

1. In the Security Configuration window, click the Groups tab. If LDAP is enabled and correctly configured, AppDynamics fetches the group names of users in LDAP.
2. Select the name of the group to which you want to assign permissions.
3. In the Roles panel, check the roles that you want to assign to this group. You can assign multiple roles to a group.
4. Click Save.

Configuring the LDAP Cache Synchronization Frequency

The Controller keeps information about LDAP users and groups in a local cache. It regularly connects to the LDAP server to synchronize its cache with the LDAP server.

The Controller caches information about users and group membership. It does not cache user passwords. Accordingly, the Controller authenticates the user credentials against the LDAP server at the start of every user session.

If a user account is removed from LDAP, the change is reflected immediately; that is, the user will not be able to log in to the Controller UI from that point. However, if the user has an existing session in the Controller UI, that session continues until the user logs out or the session expires.

If the user's access to the Controller is based on group membership and the user is removed from the group but maintains an account in the LDAP server, the user will be able to log in to the Controller until the next time synchronization with the LDAP server occurs. By the default synchronization frequency setting, this ability to access the Controller UI could continue for up to an hour.

You can modify the default synchronization frequency of one hour as described in the following procedures.

Configure the LDAP Synchronization Frequency

1. Stop the Controller application server: On Linux, run:

./controller.sh stop-appserver

On Windows, run this command from an elevated command prompt (which you can open by right-clicking on the Command Prompt icon in the Windows Start menu and choosing Run as administrator):

controller.bat stop-appserver

2. Open the /appserver/glassfish/domains/domain1/config/domain.xml file for editing.
3. In the java-config element, which contains the existing jvm-options entries, add a system property named appdynamics.ldap.sync.frequency with the desired synchronization frequency in milliseconds. For example, to have the Controller synchronize to the LDAP server every 15 minutes (900000 milliseconds), add:

-Dappdynamics.ldap.sync.frequency=900000
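In domain.xml, this appears as a jvm-options entry inside the java-config element, alongside the existing options; a sketch (the existing attributes and options are elided):

<java-config ...>
    ...
    <jvm-options>-Dappdynamics.ldap.sync.frequency=900000</jvm-options>
</java-config>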

The default is 3600000 milliseconds (1 hour).
4. Save the file.
5. Restart the Controller app server:
On Linux, run:

./controller.sh start-appserver

On Windows, run the following in an elevated command prompt:

controller.bat start-appserver

Configure Authentication Using SAML

On this page:

How SAML Authentication Works About Roles and SAML Groups Enabling SAML Authentication Configuring Default Permissions Mapping SAML Group to Roles

The AppDynamics Controller can use an external SAML (Security Assertion Markup Language) identity provider to authenticate and authorize users. This topic describes how to set up and administer SAML authentication.

How SAML Authentication Works

With SAML authentication enabled, the Controller UI redirects credentials entered in the log in page to the external SAML identity provider. To be able to log in to the Controller UI, the user needs to be able to access both the Controller and the identity provider service by network from their computer.

User privileges in the Controller UI are governed by roles (see Roles and Permissions for more information on roles). You can configure the Controller to assign roles to authenticated users based on group attributes in their SAML responses. See the following section, About Roles and SAML Groups, for more information on mapping SAML attributes to roles.

With SAML authentication enabled, the Controller log in page allows users to log in with internal AppDynamics user account credentials using the local login option. The Use Local Login link on the log in page enables users to log in with internal AppDynamics accounts. This is useful, for example, if you haven't mapped a particular role to SAML attributes, such as AppDynamics administrators.

The AppDynamics SAML integration conforms to the Security Assertion Markup Language 2.0 (SAML 2.0) specification, so you can use any SAML 2.0-compliant identity provider.

The Controller acts as the SAML service provider. Its requests use the HTTP-POST binding and direct the identity provider's response to an assertion consumer service on the Controller, as in the following configuration fragment:

ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
AssertionConsumerServiceURL="http://{appdynamics_controller_url}/controller/saml-auth?accountName={account_name}"

The issuer (entity) URL for the Controller is http://{appdynamics_controller_url}/controller.

About Roles and SAML Groups

The Controller can assign roles to SAML-authenticated users using one of the following mechanisms:

SAML group attributes: You can map SAML group membership attributes to roles in AppDynamics. Using this method, each time the user authenticates, the Controller checks the SAML assertion and updates the role assignment if needed.

Internal AppDynamics account roles: If a SAML-authenticated user has the same username as an AppDynamics internal user account and the SAML assertion does not contain mapped SAML group attributes, the Controller gives the user the roles for the internal AppDynamics account.

Default role: If there are no SAML group attributes in a user's identity assertion, the authenticated user is assigned the SAML default role upon first login. An AppDynamics administrator can verify and adjust the roles for users manually in AppDynamics once the accounts are created for those users. Manually adjusted roles are preserved across subsequent logins.

To use SAML group attributes as the basis for AppDynamics role assignments, configure the SAML group attribute value mapping. If using internal account role associations, you can simply enable SAML authentication and configure basic SAML authentication settings.

The default role is not associated with any AppDynamics roles out-of-the-box, so you need to configure the default role to use it.

Enabling SAML Authentication

The following steps assume that you have an account with a supported identity provider. You need to know the SAML Login URL and have the x.509 certificate supplied by your identity provider. You should also be familiar with the format of the SAML identity response from your SAML provider.

Enable SAML authentication as follows:

1. As a user with AppDynamics account administrator privileges in the Controller UI, go to Settings > Administration.

2. Click the Authentication Provider tab and select SAML as the authentication provider.

3. Enter the following SAML Configuration settings:
Login URL: The SAML Login URL where the Controller will route Service Provider (SP)-initiated login requests. This is required.
Logout URL: The URL where the Controller will redirect users after they log out. If you do not specify a logout URL, users will get the AppDynamics login screen when they log out.
Certificate: The x.509 certificate from your identity provider configuration. Paste the certificate between the BEGIN CERTIFICATE and END CERTIFICATE delimiters. Avoid duplicating the BEGIN CERTIFICATE and END CERTIFICATE delimiters from the source certificate itself.

4. In the SAML Attribute Mappings settings, specify how SAML-authenticated users are identified in the AppDynamics Controller:
Username Attribute: Unique identifier for the user in the SAML response. This value corresponds to the AppDynamics username field, so the value must be unique among all SAML users in the Controller account. Given the sample response below, the value for this setting would be User.OpenIDName.
Display Name Attribute: The informal name for the user, corresponding to the AppDynamics Name field. Given the sample response, this value would be User.fullName.
Email Attribute: The user's email address, corresponding to the AppDynamics email field. Given the sample response, this value would be User.email.
Account Name: If the Controller is in multi-tenant mode, the SAML response must contain a custom SAML attribute named accountName that indicates the user's AppDynamics account name. You cannot change this field mapping in the Controller.

For example, a sample response might carry attribute values such as adynamo for User.OpenIDName, [email protected] for User.email, and Mr. Ajay Dynamo for User.fullName.

5. To map SAML group attributes to AppDynamics roles, configure the SAML Group Mappings settings. The settings you use depend on the structure of the SAML group attribute in the response, as described in Map SAML Groups to AppDynamics Roles. If you are using internal AppDynamics accounts to map user roles, you can skip this step.

6. Optionally specify a master SAML Access Attribute included in the SAML response from your provider. When enabled, the Controller only grants access to users when the SAML assertion contains a matching value for the attribute. In the sample response, the attribute is named AccessControl and carries the value {access}.

7. Click Save to apply your changes. The Controller immediately starts using the SAML identity provider you configured for user authentication.
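Before pasting the certificate in the Certificate field, you can optionally inspect the file you received from your identity provider to confirm it is a valid PEM-encoded x.509 certificate. This is an illustrative sketch only; the file name is hypothetical:

openssl x509 -in idp-signing-cert.pem -noout -subject -issuer -dates
# prints the certificate subject, issuer, and validity dates if the file is a valid certificate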

Configuring Default Permissions

Instead of mapping SAML attributes to roles, you can allow all users to get a default role with the permissions you specify.

To use default permissions, edit the Default Permissions settings in the Mapping of Group to Roles list. In the Default Group Mapping dialog, choose the AppDynamics roles that all authenticated users get.

Mapping SAML Group to Roles

If the identity assertion from the SAML provider includes group attributes that correspond to AppDynamics roles, you can configure mappings between those attributes and roles. The SAML Group Mappings settings in the SAML configuration page control the mappings, as described here.

To configure SAML attribute to role mapping:

1. In the SAML Group Attribute Name field, enter the Name attribute value that identifies the SAML Attribute element with group affiliations for the user. For example, given the following response snippet, use SAML groups-Membership in the SAML Group Attribute Name field.

{group1};{group2}

2. Use the Group Attribute Value and Mapping of Group to Roles settings to describe the structure of the SAML group attribute from which AppDynamics needs to extract the group value, and the roles associated with those values. The Controller can extract Group Attribute values based on the following options, each described in its own section below: Singular Group Values, Multiple Nested Group Values, Singular Delimited Group Value, and Regex on Singular Group Value.

3. With any of these options selected, select the Value is in LDAP Format checkbox if the value or values returned by the group attribute are in LDAP format, for example "OU=AppDynamics-Users". With this option enabled, only "AppDynamics-Users" is used to map to the SAML Group name.

The following sections describe the SAML group attribute value mapping options.

Singular Group Values

Choose Singular Group Value if the SAML group attribute contains a single group, as in the following example.

Admin

For this example, AppDynamics would extract the value Admin and associate the user with a SAML Group of the same name. In a sample configuration where the Admin SAML group maps to Account Administrator, Analytics Administrator, and so on, the user would receive those roles.

Multiple Nested Group Values

With this option selected, AppDynamics expects multiple AttributeValue child elements under the SAML Attribute with the group information, as in the following example:

_Admin_ _DBManager_

AppDynamics would extract _Admin_ and _DBManager_ from the example. Given the following sample configuration, the user with the previous response would receive the roles from the _Admin_ and _DBManager_ groups.

Singular Delimited Group Value

With this option selected, AppDynamics expects a single AttributeValue element that contains multiple, delimiter-separated values, as in the following example:

Admin;DB-Manager

Specify the delimiter that separates the values to extract, such as the semicolon in the example.

Given the following sample configuration, the user would get the AppDynamics roles associated with both the Admin and DB-Manager groups—Dashboard Viewer, User, DB Monitoring Administrator, and so on.

Regular Expression on Singular Group Value

Choose this option to have AppDynamics extract group mapping values using a regular expression. Regular expressions enable you to pull group values from unstructured contexts, such as from within a larger string. The following response provides an example:

User memberships in _Admin_ and _DBManager_ groups.

In the example response, the group names _Admin_ and _DBManager_ are embedded in the AttributeValue string. To extract those names, you can use a regular expression such as _[a-zA-Z]+_. As with the other types of group attribute sources, AppDynamics assigns the user all roles associated with both the _Admin_ and _DBManager_ SAML Groups.
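You can sanity-check a candidate expression against a sample attribute value before saving it in the Controller. The following shell check is illustrative only; the Controller applies the pattern internally:

grep -oE '_[a-zA-Z]+_' <<< "User memberships in _Admin_ and _DBManager_ groups."
# prints:
# _Admin_
# _DBManager_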

Configure SAML for OneLogin

On this page:

Configure AppDynamics SAML Settings for OneLogin Configure OneLogin Settings for AppDynamics

This topic describes how to configure SAML-based single sign-on (SSO) authentication for Controller access with a particular identity provider, OneLogin. See Configure Authentication Using SAML for general information about SAML integration.

Configure AppDynamics SAML Settings for OneLogin

1. As an administrator or account owner in the Controller UI, access the Authentication Provider tab. See Configure Authentication Using SAML for information on accessing the tab. 2. Select SAML as the provider. 3. In the Login URL field, enter the SAML Login URL from your OneLogin configuration. The SAML


Login URL is the URL to the SSO service at the identity provider. The identity provider provides this URL to the Controller. If you do not know your SAML Login URL, you can locate it in your OneLogin configuration: a. Log in to your OneLogin account. b. Click the Apps tab in the first set of tabs. c. Click edit next to the application for which you want to view the Login URL. Click the Company Apps tab in the second set of tabs if it is not already selected. d. Click Single Sign-on in the third set of tabs. The SAML Login URL is the HTTP SAML Endpoint in the Sign-on method section.

4. In the Logout URL field in the AppDynamics form, enter the URL to which the browser should redirect when the user logs out. This field is optional. It is used to redirect a user who logs out to an identity provider URL instead of to the AppDynamics login screen. For example, using the logout URL https://app.onelogin.com/client/apps would redirect the user to the OneLogin application dashboard.

5. In the Certificate field in the AppDynamics form, paste the x.509 certificate from your OneLogin configuration between the BEGIN CERTIFICATE and END CERTIFICATE delimiters. Do not copy the BEGIN CERTIFICATE and END CERTIFICATE lines from the OneLogin x.509 certificate field. To find your x.509 certificate in your OneLogin configuration: a. Log in to your OneLogin account. b. Click the Security tab in the first set of tabs. c. Click SAML in the second set of tabs.


6. In the Default Roles section in the AppDynamics form, select the roles to grant to new users of the SAML-enabled controller by checking the Member check box for the role. You can select multiple roles in the list. See Roles and Permissions for information about roles and permissions. The roles that you assign here will be granted to new users when they first log in to the SAML-enabled controller if those users have not been previously created directly in the Controller. Users created prior to SAML enablement retain their original roles. You must grant at least one default role. 7. Click Save.

Configure OneLogin Settings for AppDynamics

In your OneLogin account, configure the SAML Consumer URL with the host, port, and optional account name values from the AppDynamics Controller. The Consumer URL is where the identity provider posts the SAML Authentication Assertion.

1. Log in to your OneLogin account. 2. Click the Apps tab in the first set of tabs. 3. Click edit next to the AppDynamics Connector. Click the Company Apps tab in the second set of tabs if it is not already selected. 4. In the third set of tabs click Configuration. 5. Enter the Consumer URL for the AppDynamics connector. It has the format: http[s]://<controller_host>:<port>/controller/saml-auth The host and port for your Controller account are supplied by AppDynamics.

6. Provide the AppDynamics account name if your controller is configured in multi-tenant mode and if the user normally enters an account name on login. If your controller is configured in single-tenant mode or if the user does not supply an account name on login, you can leave the Account Name field blank. See Controller Tenant Mode and Accounts for information about controller tenant modes.

7. Save your settings.

Configure SAML for Okta

On this page:

Configure AppDynamics SAML Settings for Okta Configure Okta Settings for AppDynamics

This topic describes how to configure SAML-based single sign-on (SSO) authentication for Controller access with a particular identity provider, Okta. See Configure Authentication Using SAML for general information about SAML integration.

Configure AppDynamics SAML Settings for Okta

1. As an administrator or account owner in the Controller UI, access the Authentication Provider tab in the Administration settings. See Configure Authentication Using SAML for information. 2. Select SAML as the provider. 3. In the Login URL field, enter the SAML Login URL from your Okta configuration. The SAML Login URL is the URL to the SSO service at the identity provider. The identity provider provides this URL to the Controller.

If you do not know your SAML Login URL, you can locate it in your Okta configuration: a. Log in to your Okta account. b. In the Applications tab, select your application.


c. Click View Setup Instructions. d. The Identity Provider Single Sign-On URL setting in the Okta configuration is the URL to use for the Login URL in the AppDynamics SAML configuration.

4. In the Logout URL field in the AppDynamics form, enter the URL to which the browser should redirect when the user logs out. This field is optional. It is used to redirect a user who logs out to an identity provider URL instead of to the AppDynamics login screen. You might want to redirect to the Okta login URL.

5. In the Certificate field in the AppDynamics form, paste the x.509 certificate from your Okta configuration between the BEGIN CERTIFICATE and END CERTIFICATE delimiters. Do not copy the BEGIN CERTIFICATE and END CERTIFICATE lines from the SAML x.509 certificate field. To find your x.509 certificate in your Okta configuration, follow the Setup Instructions referenced above in step 3 and scroll down to the x.509 Certificate.

6. In the Default Roles section in the AppDynamics form, select the roles to grant to new users of the SAML-enabled controller by checking the Member check box for the role. You can select multiple roles in the list. See Roles and Permissions for information about roles and permissions. The roles that you assign here will be granted to new users when they first log in to the SAML-enabled controller if those users have not been previously created directly in the Controller. Users created prior to SAML enablement retain their original roles. You must grant at least one default role.

7. Click Save.

Configure Okta Settings for AppDynamics

In your Okta account, configure the SAML SSO for AppDynamics.

1. Log in to your Okta account. 2. Click the Applications tab. 3. Click Add Applications. 4. Click Create New App. 5. Select SAML 2.0 application. 6. Click Create. 7. Click General and use the wizard to configure these settings. Leave the rest at their default values.

Setting | Value | Description

Single Sign On URL | https://<controller_host>:<port>/controller/saml-auth?accountName=<account_name> | The location where the SAML assertion is sent with an HTTP POST

Recipient URL | https://<controller_host>:<port>/controller/saml-auth?accountName=<account_name> | URL of the assertion consumer; use the Single Sign On URL

Destination URL | https://<controller_host>:<port>/controller/saml-auth?accountName=<account_name> | URL where the SAML response and assertion are consumed; use the Single Sign On URL

Audience Restriction | https://<controller_host>:<port>/controller/saml-auth?accountName=<account_name> | The intended audience of the SAML assertion

Default Relay State | https://<controller_host>:<port>/controller/ | A URL in AppDynamics where the user is redirected after successful login

Response | Signed

Assertion Signature | Signed

authnContextClassRef | PasswordProtectedTransport

Request Compression | Uncompressed

For URL values, replace <controller_host> and <port> with the address and primary listening port for the Controller. For a SaaS Controller, the URL would be in the form https://<account_name>.saas.appdynamics.com.
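For example, with hypothetical host, port, and account name values, the Single Sign On URL could be assembled as follows (illustrative only):

CONTROLLER_HOST=controller.example.com
CONTROLLER_PORT=8181
ACCOUNT_NAME=customer1
echo "https://${CONTROLLER_HOST}:${CONTROLLER_PORT}/controller/saml-auth?accountName=${ACCOUNT_NAME}"
# https://controller.example.com:8181/controller/saml-auth?accountName=customer1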

Configure SAML for Microsoft Active Directory on Azure

On this page:

Configure Active Directory on Windows Azure for AppDynamics Configure AppDynamics SAML Settings for Active Directory on Azure

Related pages:

Configure Authentication Using SAML.

You can configure Microsoft Active Directory on Windows Azure as an SAML authentication provider for the AppDynamics Controller.

Configure Active Directory on Windows Azure for AppDynamics

1. In the Windows Azure portal, click Active Directory in the left navigation menu.
2. Click your Active Directory.
3. Under the EXPLORE menu, click Add an application.
4. Click Add an application my organization is developing.
5. In the APPLICATION GALLERY window, click CUSTOM.
6. Enter AppDynamics Controller for the name and click the check icon.
7. Click Configure single sign-on.
8. Click Windows Azure AD Single Sign-On.
9. On the Configure App Settings window, enter the following:
SIGN ON URL: https://{appdynamics_controller_url}:{port}/controller/saml-auth
ISSUER URL: https://{appdynamics_controller_url}:{port}/controller
REPLY URL: https://{appdynamics_controller_url}:{port}/controller
10. On the Configure single sign-on at AppDynamics Controller window, download the certificate. You'll need this to configure SAML on the Controller.
11. Copy the SINGLE SIGN-ON SERVICE URL. You will need this for the Login URL in the Controller.

Configure AppDynamics SAML Settings for Active Directory on Azure

Configure SAML settings in the Controller according to Configure Authentication Using SAML:

Use a text editor to open the certificate you downloaded from the Windows Azure portal. Copy the contents of the certificate and paste it in the SAML Configuration Certificate field. Set the Login URL to the SINGLE SIGN-ON SERVICE URL. For example: https://login.windows.net/123ab4cd-5e67-8f90-gh1i-kl2m34no5678/saml2

Configure SAML for Microsoft Active Directory Federation Services

On this page:

Requirements Configure Active Directory Federation Services for AppDynamics Configure the Time Skew for Active Directory Federation Services Configure AppDynamics SAML Settings for Active Directory Federation Services

Related pages:

Configure Authentication Using SAML.

You can configure Microsoft Active Directory Federation Services as an SAML authentication provider for the AppDynamics Controller.

Requirements

Active Directory Federation Services version 2.0 or 2.1.

Configure Active Directory Federation Services for AppDynamics

In the Active Directory Federation Services management tool, configure a Relying Party Trust for the AppDynamics Controller:

Export the token-signing certificate as a base-64 encoded file. You'll need this to configure SAML on the Controller.

Under Services > Claim Descriptions, add a new Claim Description and set both the Display name and Claim type to "Groups".

Create a new Relying Party Trust:
On the Identifiers tab, set the Relying party identifier to the Controller URL: https://{appdynamics_controller_url}:{port}/controller.
On the Endpoints tab, create the following:
SAML Assertion Consumer endpoint: https://{appdynamics_controller_url}:{port}/controller/saml-auth
SAML Logout endpoint: Set the URL to https://{adfs server url}/adfs/ls/?wa=wsignout1.0 and leave the Response URL blank.

For multi-tenant customers, create a claim rule for the relying party to pass the AppDynamics account name. For example:

=> issue(Type = "accountName", Value = "MyAccount", Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/attributename"] = "urn:oasis:names:tc:SAML:2.0:attrname-format:basic");

Optionally, create claim rules for the relying party to map Active Directory groups to roles in the AppDynamics Controller. The claim rule type is "Send Group Membership as a Claim."

Make sure role names in the Controller match the Active Directory group names exactly. The Controller automatically maps incoming SAML groups to matching roles.

Configure the Time Skew for Active Directory Federation Services

If the system times on the Active Directory server and the Controller machine do not align, you can configure the time skew for Active Directory.

To set the time skew, run the following command in PowerShell:

Set-ADFSRelyingPartyTrust -TargetName AppDynamics -NotBeforeSkew <minutes>

For example, run the following command to set the time skew to 3 minutes:

Set-ADFSRelyingPartyTrust -TargetName AppDynamics -NotBeforeSkew 3

Configure AppDynamics SAML Settings for Active Directory Federation Services

Configure SAML settings in the Controller according to Configure Authentication Using SAML:

Set the Login URL to https://adfs.example.com/adfs/ls/. Set the Logout URL to https://adfs.example.com/adfs/ls/?wa=wsignout1.0. Use a text editor to open the certificate you exported from the Active Directory Federation Services management tool. Copy the contents of the certificate and paste it in the SAML Configuration Certificate field.

Disable SAML Authentication for an Account

You can disable SAML authentication for an account from the AppDynamics administration console. To disable SAML authentication:

1. Log into the Administration Console. 2. In the left navigation panel, click Accounts. 3. From the accounts list, select the account for which you want to disable SAML authentication. 4. Click the Edit icon. The settings for the selected account are displayed, including the SAML authentication setting if SAML is enabled.

5. Click Disable.

Start or Stop the Controller

On this page:

Start and Stop the Controller Start and Stop the Controller App Server Start and Stop the Controller Database Start and Stop the Events Service Additional Commands

The scripts to start and stop the Controller and other AppDynamics platform processes are located in the <controller_home>/bin directory. Using the scripts, you can start the processes that comprise the platform individually or all at once.

Starting or stopping individually allows you to turn off data collection (by shutting down just the app server, for instance) while performing database operations. The startController script starts both the database and app server with one command. Similarly, the stopController script turns off both the GlassFish application server and the database.

To avoid the possibility of data corruption errors, be sure to stop the application server and database gracefully (that is, by using the stop scripts described on this page) before shutting off or rebooting the machine on which the Controller is running.
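For example, on Linux a planned machine reboot might be wrapped as follows. This is an illustrative sketch only; <controller_home> stands for your Controller installation directory:

cd <controller_home>
bin/stopController.sh      # gracefully stops the app server and the database
sudo reboot
# after the machine is back up:
cd <controller_home>
bin/startController.sh     # starts the database and the app server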

Start and Stop the Controller

On Linux

To start the Controller on Linux, run this script from the Controller home:

bin/startController.sh

To stop the Controller on Linux:

bin/stopController.sh

On Windows

For Windows, note that the Controller processes are automatically installed as a service. To start the services, run this script from the Controller home:

bin\startControllerSvcs.bat

To stop the Controller services:

bin\stopControllerSvcs.bat

If you have uninstalled the Controller as a Windows service for any reason, start and stop the Controller by running startController.bat and stopController.bat from an elevated command prompt (run as administrator).

Never stop the Controller or its individual services (the AppDynamics Application service or AppDynamics Database services) from the task manager. To stop and restart the AppDynamics processes, open an elevated command prompt (run as administrator) and run the scripts for starting and stopping the service.

Start and Stop the Controller App Server

On Linux

Start the app server by running this script from the Controller home:

bin/controller.sh start-appserver

Stop the app server using this command:

bin/controller.sh stop-appserver

On Windows

Run the following command in an elevated command prompt from the Controller home:

bin\controller.bat start-appserver

Stop the app server using this command:

bin\controller.bat stop-appserver

Start and Stop the Controller Database

On Linux

Start the database by running this command from the Controller home:

bin/controller.sh start-db

Stop the database by running this command:

bin/controller.sh stop-db

On Windows

Start the database by running the following command in an elevated command prompt:

bin\controller.bat start-db

Stop the database by running this command:

bin\controller.bat stop-db

Start and Stop the Events Service

The events service is an internal data storage engine used by the Database Monitoring module.

The events service is off by default. To use Database Monitoring, you need to start the events service.

On Linux

Start the events service on Linux by running this command from the Controller home:

bin/controller.sh start-events-service

Stop the events service by running this command:

bin/controller.sh stop-events-service

On Windows

Start the events service on Windows by running this command from the Controller home:

bin\controller.bat start-events-service

Stop the events service by running this command:

bin\controller.bat stop-events-service

Additional Commands

If you run the controller script with no arguments, the command prints all options it accepts. These include commands, for example, for starting and stopping the events service or reporting service, logging in to the database, and stopping HTTP listeners.

In general, you should use these additional operations only as specifically instructed by the AppDynamics documentation or by AppDynamics support.

Controller Data and Backups

On this page:

About Controller Data Storage Controller Data Directory Location Manage the Database User Password Moving the Controller Data Directory

This topic provides an overview of how to configure and administer Controller data storage.

About Controller Data Storage

The Controller requires persistent data storage to store the following type of information:

Design of your applications (all metadata about business transactions, tiers, policies, and so on) History of the performance of your applications (metric data) Transaction snapshot data and events History of incidents that occurred (both resolved and unresolved incidents)

Controller Data Directory Location

By default, the AppDynamics Controller uses MySQL as its storage mechanism. A MySQL instance is bundled with the Controller installation. At installation time, the installer creates the necessary tables and artifacts in the database.

By default, the database files and data are stored in <controller_home>/db.

Manage the Database User Password

The installer creates the user account that the Controller uses to log into the database to perform database-related operations. The username of the account is "root", and the password is the one you supply to the installer during the installation process.

When attempting to access the data, the Controller reads the database user password from these sources and in the priority shown:

From the MYSQL_ROOT_PASSWD environment variable
From user input to a command line prompt

If you do not keep the password in the environment variable, you will need to supply it in response to a command line prompt whenever performing an operation that involves accessing the database, such as starting the Controller (since this requires starting the database), starting the database, stopping the database, or logging into the database.
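For unattended scripts, you can export the variable before invoking the Controller scripts so that no interactive prompt appears. This is a hedged sketch; the password value and <controller_home> path are placeholders:

export MYSQL_ROOT_PASSWD='<db_root_password>'
<controller_home>/bin/controller.sh stop-db
<controller_home>/bin/controller.sh start-db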

New in 4.2.1.7: The database root user password is no longer kept on disk in the .rootpw file.

Moving the Controller Data Directory

After installation you can move the data directory to a new location. The following conditions can make it necessary to move the data:

When you want to store the Controller data on a SAN in order to get higher I/O performance and redundancy.
When there is not enough disk space available in the location chosen during Controller installation.

If you are using symlinks, you must create the symlink outside of the root Controller install directory and move the data directory to the new volume after you install the Controller.

Warning: Do not mount a file system on <controller_home>/db/data. During Controller upgrade, the installer moves the data directory to data_orig. Upgrade will fail if the installer cannot complete this move.

To relocate the Controller data directory

1. Stop the Controller and its database. See Start or Stop the Controller. 2. Modify the following properties in the <controller_home>/db/db.cnf file to point to the new location of the data directory.

datadir
tmpdir
log
slow_query_log_file

3. Copy (or move) the existing data directory from <controller_home>/db to the new location. For example, to copy the data on Linux:

cd <controller_home>/db/
cp -r data <new_data_directory>

4. Start the Controller. See Start or Stop the Controller. 5. Check the database.log and server.log for any errors related to the database connection.
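As an illustration, if the data were relocated to /data01/appdynamics, the edited db.cnf entries might look like the following. The paths are hypothetical; keep the file names from your existing settings:

datadir=/data01/appdynamics/data
tmpdir=/data01/appdynamics/tmp
log=/data01/appdynamics/logs/mysql.log
slow_query_log_file=/data01/appdynamics/logs/slow-query.log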

Controller Data Backup and Restore

On this page:

Best Practices for Backups Backup Tools Back Up the Controller with mysqldump Import Controller Data with mysql Sample Data Backup Script Backup the Platform Administration Application with mysqldump Using a Backup to Migrate to a New Physical Server

AppDynamics strongly recommends that you perform routine data backups of the Controller and Platform Administration Application.

One method of maintaining backups of the Controller is to implement high availability. With high availability, the database on the secondary Controller keeps a replicated copy of the data on the primary Controller. A secondary Controller also makes it practical to take cold copies of the Controller data, since you can shut down the secondary to copy its data without affecting Controller

availability. For information on HA, see Controller High Availability (HA).

Other approaches include using a disk snapshot mechanism, such as Linux Volume Manager, or using database backup tools. The Backup Tools section describes tools that support each approach. In addition to regular backups, back up the Controller and Platform Administration Application before upgrading or migrating them from one server to another.

This page provides an overview of the tasks and considerations related to backing up the Controller and Platform Administration Application.

Best Practices for Backups

Backing up the entire system each night may not be feasible when dealing with the large amount of data typically generated by a Controller deployment. To balance the risk of data loss against the costs of performing backups, a typical backup strategy calls for backing up the system at different scopes at different times. That is, you may choose to perform partial backups more frequently and full backups less frequently.

The scope of a Controller backup can be categorized into these levels:

Level 1: A light backup of the installation environment only.
Level 2: A metadata backup involving all metadata associated with the installation except big data tables.
Level 3: Backs up all data, either by performing a cold backup of the data directory or a hot backup using a third-party tool.

A possible backup strategy may be to perform a level 1 and level 2 backup very frequently, say nightly, and a level 1 and level 3 backup about once a week. In addition to performing a level 1 or 2 backup, you should also back up the data for the Platform Administration Application with mysqldump on a regular basis. You do not need to back up the Platform Administration Application when you perform a level 3 backup since it also backs up the Platform Administration Application data.

Light Backup (Level 1)

A light backup targets Controller configuration files like db.cnf and domain.xml. This type of backup lets you avoid having to reconfigure the Controller in case of machine failure.

To perform this type of backup, simply copy everything in the Controller installation directory EXCEPT the data directory.

While it is recommended that you copy the entire Controller home except the data directory when performing a light backup, particularly before performing a Controller upgrade, there are scenarios in which you may wish to copy only site-specific configuration files. This may be the case if you are migrating an existing Controller configuration to a new Controller installation, for example. For a list of those files, see Migrate the Controller.

Metadata Backup (Level 2)

A metadata backup exports the data that encapsulates the environment monitored by the Controller. Metadata defines the applications monitored by the Controller, business transactions, policies, and so on. It does not include what can be thought of as "runtime data", the big data tables that contain the metrics, snapshots, events, and top summary stats (top SQL, top URLs, and so on) generated in the monitored environment. By backing up metadata, you can avoid having to reconfigure monitored applications in the Controller in the event of a failure.

To perform this type of backup, run the script described in Back Up the Controller with mysqldump below.

Complete Backup (Level 3)

A complete backup saves all runtime data associated with the Controller installation. It captures the actual metrics data, snapshots, and so on.

The AppDynamics Controller uses the default storage engine for MySQL, InnoDB. Since InnoDB supports transactions (in contrast to MyISAM), you cannot simply copy data files directly while the Controller is running. You need to stop the Controller before copying data files. However, some third-party backup tools, such as Percona XtraBackup, do not rely on transactions so you can perform a hot backup of your system (that is, back up the Controller database while it is running).

You can perform a complete backup as either:

A cold backup of the <controller_home>/db/data directory. A cold backup means that, with the Controller app server and database shut down, you create an extra copy of the Controller database using, for example, the "cp -r" command, the tar utility, rsync, or others. A hot backup, which means the Controller is running. For a hot backup, use a third-party tool such as XtraBackup or LVM snapshotting.

This type of backup allows you to skip the Level 2 backup and the Platform Administration Application backup. You still need to perform the Level 1 backup to back up the Controller home directory.

Backup Tools

This section lists a few third-party tools that you can use to back up Controller data. The list is not exhaustive; you can use any tool capable of backing up MySQL data with the Controller. However, the tool should back up the data as binary data.

For Linux systems:

Percona XtraBackup InnoDB Hot Backup

For Windows systems:

Zmanda Recovery Manager for MySQL

An alternative to using a database backup tool is to use a disk snapshot tool to replicate the disk or partition on which the Controller data resides. Options include:

ZFS volume manager. For more information, see Using ZFS methods for data backup. The Linux Logical Volume Manager (LVM), or a tool based on LVM such as mylvmbackup, provides a similar capability.

Details for performing this type of backup are beyond the scope of this documentation. For more information, refer to administration documentation applicable to your specific operating system.

Back Up the Controller with mysqldump

The mysqldump utility is a MySQL backup tool that is included with the Controller instance of MySQL.

While mysqldump is not recommended for use on large data tables, such as the Controller metric data tables, it is useful for backing up Controller metadata. Metadata defines the monitored domain for the Controller, including applications, business transactions, alert configurations, and so on.

The following instructions assume that the binary path for the Controller's MySQL instance is in the PATH variable. The path to the Controller's instance of MySQL must precede any other MySQL path on your system. This prevents conflicts with other database management systems on your machine, such as a MySQL instance included by default with Linux.

The database binary files for the Controller database are in <controller_home>/db/bin.

To use mysqldump, run the mysqldump executable, passing the root username, password, and output file. The executable is located in the following directory:

<controller_home>/db/bin

The command should be in the form:

mysqldump -u root -p controller > /tmp/metadata_dump.sql

For a full example that shows which tables to exclude for a metadata backup, see the contents of the metadata backup script described in the next section.
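The pattern the script follows is to pass --ignore-table flags for the large runtime tables. The following is a hedged sketch only; the table names shown are illustrative, and the downloadable script carries the authoritative list:

<controller_home>/db/bin/mysqldump -u root -p \
  --ignore-table=controller.metricdata_min \
  --ignore-table=controller.metricdata_hour \
  controller > /tmp/metadata_dump.sql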

Sample mysqldump Script

The following script illustrates how to use mysqldump to export Controller metadata while excluding runtime data tables by script.

Linux: ControllerMetadataBackup.sh.txt Windows: ControllerMetadataBackup.cmd.txt

To use the script:

1. Download the version appropriate for your operating system. 2. Rename the file to remove the .txt extension. 3. Modify the contents of the file as described in the script comments.

Import Controller Data with mysql

When you restore or migrate the Controller, you can import the data you exported with mysqldump.

Use the following command to import the data into a database:

$install/db/bin/mysql -u controller -p controller < metadata_dump.sql

Sample Data Backup Script

The following script uses Percona XtraBackup to back up Controller data. To use it, you need the percona-xtrabackup or xtrabackup and qpress packages. For information on installing XtraBackup, see the Percona installation documentation.

To use the script, download the following file:

appdynamics-backup.sh.txt

Rename the script (by removing the .txt extension). In the script:

Verify or edit the values of the CONTROLLER_HOME and DESTINATION variables at the beginning of the script for your environment. Edit the if/then/else clause at the end of the script if you want to implement backup file rotation, call your enterprise backup system to pick up the compressed Controller database image, or send an alert if the backup fails for any reason.

The following commands demonstrate how to restore a compressed backup image:

mkdir /path/to/big/staging/folder

# unpack the compressed backup archive cd /path/to/big/staging/folder && xbstream -xv < /path/to/backups/dir/controller-yyyymmdd.xbstream

# decompress the backup image and apply the log taken during backup CONTROLLER_HOME=/path/to/AppDynamics/Controller && cd /path/to/big/staging/folder \ && innobackupex --decompress --parallel=16 . && innobackupex \ --defaults-file $CONTROLLER_HOME/db/db.cnf --use-memory=1GB --apply-log --parallel=16 .

# Move a prepared backup into an empty controller data directory CONTROLLER_HOME=/path/to/AppDynamics/Controller && cd /path/to/big/staging/folder \ && innobackupex --defaults-file $CONTROLLER_HOME/db/db.cnf --move-back .

For more information on these options, see the Percona innobackupex option reference.

Backup the Platform Administration Application with mysqldump

You back up the data for the Platform Administration Application separately from the Controller data.

Run the following command to back up data for the Platform Administration Application:

mysqldump -u root -p platform_admin > /tmp/platform_admin_dump.sql

The command creates an export file for the Platform Administration Application in the /tmp directory.

You can import the data into a database with the following command:

$install/db/bin/mysql -u controller -p platform_admin < platform_admin_dump.sql

Using a Backup to Migrate to a New Physical Server

You can use either a hot or cold backup procedure to migrate Controller data to a new server. However, we recommend performing cold backups. While a hot backup does not bring down the Controller for an extended amount of time, it does introduce the possibility of data loss, since hot backups capture the state of the data only when the hot backup starts.

To perform a cold backup, simply shut down the Controller and back up the data directory located in <controller_home>/db.
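A minimal sketch of a cold backup for migration on Linux, assuming the Controller is installed at <controller_home> and that /backups has enough free space:

<controller_home>/bin/stopController.sh
tar -czf /backups/controller-data-$(date +%Y%m%d).tar.gz -C <controller_home>/db data
<controller_home>/bin/startController.sh
# copy the archive to the new server and unpack it into the new installation's db directory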

Controller Disk Space and the Database

On this page:

Disk Space Considerations Managing Disk Space

This topic discusses best practices for managing disk space for the MySQL database used by the Controller.

Disk Space Considerations

To ensure database integrity, the Controller automatically shuts itself down when available disk space falls below 1 GB.

Before it reaches that point, the Controller displays a low disk space alert in the UI and writes an error level event to server.log. The point at which the Controller generates the alert depends on its profile, as follows:

For large and extra large profiles: 10 GB or less
For all other profiles: 2 GB or less

The Controller shuts itself down when there is less than 1 GB on the disk regardless of the Controller profile type.

It's important to note that the Controller monitors the disk or partition that it is installed on. If the Controller data resides on a different disk or partition from the Controller home directory, you will need to monitor available space on that disk or partition separately.
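For example, you can check the remaining space on the partition that holds the data directory with df; the path below is a placeholder for your actual data location:

df -h <controller_home>/db/data
# shows the size, used space, and available space for the filesystem containing the data directory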

Managing Disk Space

If the disk space is low, you need to reduce the size of the Controller database.

To manage how much disk space the Controller database uses, you can change the amount of data retained in the Controller database. See Database Size and Data Retention.

Database Size and Data Retention

On this page:

Information Stored by the On-Premises Controller Before Proceeding Modifying Event, Transaction Snapshot, and Incidents Data Retention Settings Modifying Metric Data Retention Periods Limiting the Business Transaction Registration Retention Period Troubleshooting Controller Database Growth Issues

This topic explains the default data retention periods for data stored by the AppDynamics On-Premises Controller.

Information Stored by the On-Premises Controller

The Controller stores a variety of information in its database, including configuration data, event data and transaction snapshots, metric data, and incident data (such as policy violations and very slow transactions). The amount of data stored by the database, and accordingly, the amount of disk space it consumes, is controlled by data retention settings.

You can tune the retention settings in the Administration UI. In particular, you may need to reduce retention settings to reduce the amount of data stored by the Controller. Lowering the snapshot retention (snapshots.retention.period) and hourly metric retention (metrics.retention.period) settings is a good starting point and can significantly reduce disk consumption.

Before Proceeding

It is important to understand that increasing the retention periods for the settings described on this page can significantly affect Controller performance and disk consumption. To understand the impact of changing a retention period, consider the example of cutting the minute metric retention period from the default of four hours to two. This would cut in half the amount of disk space required to keep metrics at the one minute level of granularity. On the other hand, doubling the default from four to eight would double the amount of disk space consumed by that type of metric.

The AppDynamics administration UI imposes allowed values on the input fields for the metric retention settings. However, the allowed values are based on theoretical limits and do not take into account the actual capacity of the current machine. For a given deployment, the practical limits for the settings depend on the hardware resources available to your Controller, the amount of application activity, the features you are using, and other factors.

The Controller machine sizing guidelines assume that default values are used for retention settings. Increasing the default values therefore can affect the assumptions on which those guidelines are based. Increasing the retention settings, even within the allowed values permitted by the UI, could have adverse effects on performance or result in disk space exhaustion. AppDynamics strongly recommends that you monitor your system carefully if you change any of these settings from the default, and be prepared to roll back your changes if necessary.

Note that the practical limits for a SaaS Controller are different from those shown here. For the data retention settings applicable to your SaaS Controller, refer to the terms of your account.

Modifying Event, Transaction Snapshot, and Incidents Data Retention Settings

You can only change these settings if your Controller is on-premises or you have a dedicated SaaS Controller.

You can modify the default retention periods for events, snapshot and incidents data.

Before following these steps to modify retention settings, see Before Proceeding.

To change the retention period for event, snapshot, and incident data

1. Log in to the Controller administration console using the root account password. (See Access the Administration Console.)

http://<controller_host>:<port>/controller/admin.jsp

Use the root user password to access the Admin console whether the Controller is installed in single- or multi-tenant mode. This password is set at installation time.

2. Click Controller Settings.

3. Change the data retention periods as needed. The retention times for incident and event data and transaction snapshots are determined by the following parameters:

Property Name | About the property | Default

events.retention.period | The retention time for events, in hours. | 336 hours (2 weeks)

snapshots.retention.period | The retention time for transaction snapshot data, in hours. | 336 hours (2 weeks)

incidents.retention.period | The retention time for incidents, including policy violations and very slow transactions, in hours. | 336 hours (2 weeks)

After you lower one of the data retention period settings, data in the database that falls outside the scope of the new data retention period is discarded. As a result, the size of the database should drop anywhere from 30 to 60 minutes after you make the change. If it does not, see Troubleshooting Controller Database Growth Issues.

Modifying Metric Data Retention Periods

You can only change these settings if your Controller is on-premises or you have a dedicated SaaS Controller.

You can modify the default metric data retention periods. The metric retention periods control how long data is retained at 1-minute, 10-minute, and 1-hour resolution (see Metric Data Resolution Over Time).

There are a few important points to note before changing metric data retention periods:

Setting longer retention periods for metric data can quickly and significantly affect database size and Controller performance.

If you need to review details for an issue that occurred during a period for which you have only 10-minute or 1-hour data, AppDynamics provides access to diagnostic data at a more detailed level than might be visible on a graph, so increasing the default retention periods is not often necessary.

While metric retention periods apply to the data kept in the Controller database, some of the data that appears in the Controller UI (in particular, data at the 1- and 10-minute granularity level that appears in the tier and application dashboards) is actually drawn from the Controller cache rather than from the database. If you set the 1- and 10-minute metric retention periods to an overall retention period that exceeds the cache retention period, the performance of the Controller UI will be adversely affected, since the UI will have to retrieve the data from the database rather than from the cache. The cache retention period is determined by the caches.retention.period setting, which you can modify in the Administration Console, as described below.

Changing the data retention periods in the Controller Settings does not affect the values shown as time ranges in the Controller UI.

See Before Proceeding for more information about increasing data retention settings.

To change the data retention period for metrics

1. Log in to the Controller Administration Console. 2. Click Controller Settings. 3. Change the data retention period by modifying the values for the following settings and saving. The settings are:

Property name | About the property | Default

metrics.min.retention.period | The number of hours that 1-minute data is retained. This value should always be less than the value of metrics.ten.min.retention.period. | 4 hours

metrics.ten.min.retention.period | The number of hours that 10-minute data is retained. This value should always be greater than metrics.min.retention.period and less than metrics.retention.period. | 48 hours

metrics.retention.period | The number of days that 1-hour data is retained. This value should always be greater than metrics.ten.min.retention.period. | 365 days

For example, if you change the metrics.min.retention.period property to 3, metric data displayed for all time ranges less than or equal to 3 hours is shown at one-minute resolution. Metric data for time ranges greater than 3 hours and less than metrics.ten.min.retention.period is shown at 10-minute resolution, and metric data for older time ranges is shown at 1-hour resolution.

As another example, if you change metrics.ten.min.retention.period to 72, all time ranges less than or equal to 72 hours (3 days) and greater than or equal to metrics.min.retention.period are displayed at 10-minute resolution.

Limiting the Business Transaction Registration Retention Period

You can only change these settings if your Controller is on-premises or you have a dedicated SaaS Controller.

The business transaction retention period determines how long the Controller retains a "stale business transaction" as a registered business transaction. A stale business transaction is one that has not received a request in a given period of time.

By default, the business transaction registration is retained forever. You can specify a set timeout period for the business transaction if needed. If the business transaction has not seen a request in the period of time you configure, its registration is discarded, making the registration slot available for another business transaction.

To change the business transaction retention period, change the business.transaction.retention.period Controller setting in the Controller Administration Console. The default value is 0, which means that the business transaction registration is retained forever. The minimum retention time is 24 hours.

Troubleshooting Controller Database Growth Issues

If changing the data retention settings does not improve the rate of database growth for your system, first make sure you have restarted the Controller since making the configuration changes.

If working with support on troubleshooting, along with a description of the problem, provide a listing of your data directory. To generate the listing, run the following command on the Controller machine:

ls <controller_home>/db/data/controller/ -lS > controller.output

The command writes the output to the controller.output file, which you can attach to your support ticket.

Controller High Availability

On this page:

Overview of High Availability Operating Considerations Connecting Agents to Controllers in an HA Scenario

A High Availability (HA) Controller deployment helps you minimize the disruption caused by server or network failure, administrative downtime, or other interruptions. An HA deployment is made up of two Controllers, one in the role of the primary and the other as the secondary.

The High Availability (HA) Toolkit automates many of the configuration and administration tasks associated with a highly available deployment.

Even if you cannot use the toolkit directly (due to specific requirements or operating system compatibility), the toolkit can provide a model for your own tools and procedures. Essentially, to set up high availability for Controllers, you are configuring master-master replication between the MySQL instances on the primary and secondary Controllers.

An important operational point to note is that while the databases for both Controllers should be running, both Controller application servers should never be active (i.e., running and accessible by network) at the same time. Similarly, the traffic distribution policy you configure at the load balancer for the Controller pair should only send traffic to one of the Controllers at a time (i.e., do not use round-robin or similar routing distribution policy at the load balancer).

Overview of High Availability

Deploying Controllers in an HA arrangement provides significant benefits. It allows you to minimize the downtime in the event of a server failure and take the primary Controller down for maintenance with minimal disruption. It fulfills requirements for backing up the Controller data, since the secondary maintains an updated copy of the Controller data. The secondary can also be used to perform certain resource-intensive operations that are not advised to be performed on a live Controller, such as performing a cold backup of the data or accessing the database to perform long-running queries, say for troubleshooting or custom reporting purposes.

In HA mode, each Controller has its own MySQL database with a full set of the data generated by the Controller. The primary Controller has the master MySQL database, which replicates data to the secondary Controller's slave MySQL database. HA mode uses a MySQL Master-Master replication type of configuration. The individual machines in the Controller HA pair need to have an equivalent amount of disk space.

The following figure shows the deployment of an HA pair at a high level. In this scenario, the agents connect to the primary Controller through a proxy load balancer. The Controllers must be equivalent versions. The App Servers and Controllers may be distributed in different data centers for redundancy.

In the diagram, the MySQL instances are connected via a dedicated link for purposes of data replication. This is an optional but recommended measure for high volume environments. It should be a high capacity link and ideally a direct connection, without an intervening reverse proxy or firewall. See Load Balancer Requirements and Considerations on Using the High Availability (HA) Toolkit for more information on the deployment environment.

The HA toolkit provides a set of scripts that automate many of the tasks associated with high availability for Linux environments. Even if you are unable to use the toolkit (due to operating system incompatibility or site-specific requirements), the toolkit can provide a model for your own processes.

Operating Considerations

In a high availability deployment, it is important that only one Controller is the active Controller at one time. Only the database processes should be running on the secondary so that it can maintain a replicated copy of the primary database.

The Controller app server process on the HA secondary can remain off until needed. Having two active primary Controllers is likely to lead to data inconsistency between the HA pair.

When a failover occurs, the secondary app server must be started, or restarted if it is already running (restarting clears its cache).

Connecting Agents to Controllers in an HA Scenario

Under normal conditions, the App Agents and Machine Agents communicate with the primary Controller. If the primary Controller becomes unavailable, the agents need to communicate with the secondary Controller instead.

AppDynamics recommends that traffic routing be handled by a reverse proxy between the agents and Controllers, as shown in the figure above. This removes the necessity of changing agent configurations in the event of a failover, or the delay imposed by using DNS mechanisms to switch the traffic at the agent.

If using a proxy, set the value of the Controller host connection in the agent configuration to the virtual IP or virtual hostname for the Controller at the proxy, as in the following example of the setting for the Java Agent in the controller-info.xml file:

<controller-host>controller.company.com</controller-host>

For the .NET Agent, set the Controller high availability attribute to true in the config.xml. See .NET Agent Configuration Properties.

If you set up automation for the routing rules at the proxy, the proxy can monitor the Controller at the following address:

http://<controller_host>:<controller_port>/controller/rest/serverstatus

An active node returns an HTTP 200 response to GET requests to this URL.
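For example, a quick manual check of this endpoint from a shell (curl is not part of the AppDynamics tooling; substitute your own Controller host and port):

# Prints the HTTP status code returned by the serverstatus endpoint; an active node returns 200
curl -s -o /dev/null -w "%{http_code}\n" http://controller.company.com:8090/controller/rest/serverstatus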

For more information, see Deploy with a Reverse Proxy.

Using the High Availability (HA) Toolkit

On this page:

About the HA Toolkit
Before Starting
User Privilege Escalation Requirements
Load Balancer Requirements and Considerations
Setting Up the Controller High Availability Pair
Starting the Controller Availability Watchdog
Bouncing the Primary Controller without Triggering Failover
Starting and Stopping the Controller
Installing as a Service
Performing a Manual Failover and Failback
Reviving a Controller database
Backing Up Controller Data in an HA Pair
Updating the Configuration in an HA Pair
Troubleshooting HA

The AppDynamics HA Toolkit helps you set up and administer a high availability (HA) deployment of AppDynamics Controllers. This topic describes how to use the toolkit to manage Controllers as a high availability pair.

About the HA Toolkit

The HA toolkit consists of bash scripts that automate HA-related set up and administration tasks for the Linux operating system. It works with most flavors of Linux, including Ubuntu and Red Hat/CentOS.

You can use the toolkit to:

- Configure Controllers in a high availability pair arrangement.
- Install a watchdog process on the secondary that monitors the primary Controller and automatically fails over to the secondary when needed.
- Install the Controller as a Linux service.
- Fail over to a secondary Controller manually (for example, when you need to perform maintenance on the primary).
- Revive a Controller (restore a Controller as an HA secondary after its database is more than seven days behind the master as a replication slave).

Deploying Controllers as an HA pair ensures that service downtime in the event of a Controller machine failure is minimized. It also facilitates other administrative tasks, such as backing up data. For more background information, including the benefits of HA, see Controller High Availability.

The toolkit works on Linux systems only. However, even if you cannot use the HA toolkit directly (due to a different operating system or because of site-specific requirements), you are likely to benefit from learning how the toolkit works, since it can provide a model for your own scripts and processes.

You can download the HA toolkit from the following GitHub link: https://github.com/Appdynamics/HA-toolkit/releases.

Before Starting

In an HA deployment, the HA toolkit on one Controller host needs to be able to interact with the other Controller host. The toolkit relies on certain conditions in the environment to support this interaction, along with the other operations it performs.

General guidelines and requirements for the environment in which you deploy HA are:

- Two dedicated machines running Linux. The Linux operating systems can be Fedora-based Linux distributions (such as Red Hat or CentOS) or Debian-based Linux distributions (such as Ubuntu).
- One of the machines should have the Controller installed. The second machine should not have a Controller installed, as the HA Toolkit replicates the Controller from the primary machine to the secondary.
- In a Controller HA pair, a load balancer should route traffic from Controller clients (Controller UI users and App Agents) to the active Controller. Before starting, make sure that a load balancer is available in your environment and that you have the virtual IP for the Controller pair as presented by the load balancer.
- Port number 3388 should be open between the machines in an HA pair.
- The login shell must be bash (/bin/bash).
- The host machines should have identical directory structures.
- A network link connecting the HA hosts that can support a high volume of data. The primary and secondary may be in separate data centers, but a dedicated network link between the hosts is recommended.
- SSH keys on each host that allow SSH and rsync operations by the AppDynamics user (instructions below).
- The hosts file (/etc/hosts) on both Controller machines should contain entries to support reverse lookups for the other node in the HA pair (instructions below).
- Because Controller licenses are bound to the network MAC address of the host machine, the HA secondary Controller needs an additional HA license. You should request a secondary license for HA purposes in advance.
- The location of the Controller home directories needs to be writable by the user under which the Controller runs.
- Super user access on the machine to install system services on the host machine, either as root or as a user in the sudoers group. See the next section for considerations and additional requirements for running as a non-root (but sudo-capable) user.
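As a quick sanity check before continuing (not a toolkit step), you can confirm from one node that the replication port on the other is reachable. This assumes the netcat (nc) utility is installed and that the peer is named node2, as in the examples later in this topic:

# Exits successfully if port 3388 on the peer host is open
nc -zv node2 3388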

User Privilege Escalation Requirements

If the AppDynamics Controller is run by a non-root user on the system, the HA toolkit process must be able to escalate its privilege level to accomplish certain tasks, including replication, failover and assassin tasks.

The script uses one of the following two mechanisms to accomplish privilege escalation. The toolkit installation adds artifacts required for both mechanisms; however, the one that it actually uses is determined at runtime based on dependencies in the environment, as follows:

- /etc/sudoers.d/appdynamics contains entries to allow the AppDynamics user to access the /sbin/service utility using sudo without a password. This mechanism is not available if the AppDynamics user is authenticated by LDAP.
- /sbin/appdservice is a setuid root program distributed in source form in HA/appdservice.c. It is written explicitly to support auditing by security audit systems. The install-init.sh script compiles and installs the program. It is executable only by the AppDynamics user and the root user. The script requires a C compiler to be available on the system. You can install a C compiler using the package manager for your operating system. For example, on Yum-based Linux distributions, you can use the following command to install the GNU Compiler, which includes a C compiler:

sudo yum install gcc

After deploying the HA Controller pair, you will be able to test support for one of these functions (without causing system changes) by running these commands as the appd user:

/sbin/appdservice appdcontroller status
sudo /sbin/service appdcontroller status

At least one of the commands must successfully return the Controller status for the toolkit to work.

Load Balancer Requirements and Considerations

Before setting up HA, a reverse proxy or load balancer needs to be available and configured to route traffic to the active Controller in the HA pair. Using a load balancer to route traffic between Controllers (rather than other approaches, such as DNS manipulation) ensures that a failover can occur quickly, without, for example, delays due to DNS caching on the agent machines.

An HA deployment requires the following IP addresses:

- One for the virtual IP of the Controller pair as presented by the load balancer, used by clients, such as App Agents, to access the Controller.
- Two IP addresses for each Controller machine: one for the HTTP primary port interface and one for the dedicated link interface between the Controllers on each machine. The dedicated link is recommended but not mandatory.
- If the Controllers will reside within a protected, internal network behind the load balancer, an additional internal virtual IP for the Controller within the internal network.

When configuring replication, you specify the external address at which Controller clients, such as App Agents and UI users, address the Controller at the load balancer. The Controllers themselves need to be able to reach this address as well. If the Controllers reside within a protected network relative to the load balancer, preventing them from reaching this address, there needs to be an internal VIP on the protected side that proxies the active Controller from within the network. This is specified using the -i parameter.

The load balancer can check the availability of the Controller at the following address: http://<controller_host>:<controller_port>/controller/rest/serverstatus

If the Controller is active, it responds to a GET request at this URL with an HTTP 200 response. The body of the response indicates the status of the Controller in the following manner:

... true ...

Ensure that the load balancer policy you configure for the Controller pair can send traffic to only a single Controller in the pair at a time (i.e., do not use round-robin or similar routing distribution policy at the load balancer). For more information about setting up a load balancer for the Controller, see Deploy with a Reverse Proxy.

Setting Up the Controller High Availability Pair

Setting up high availability involves the following steps:

Step 1: Configure the Environment
Step 2: Install the HA Toolkit
Step 3: Prepare the Primary Controller
Step 4: Set Up the Secondary Controller and Initiate Replication

Step 1: Configure the Environment

The following sections provide more information on how to configure a few of the system requirements. They describe how to configure the settings on Red Hat Linux for a sample deployment. Note that the specific steps for configuring these requirements may differ on other systems; consult the documentation for your system for details.

Host Reverse Lookups

Reliable symmetrical reverse host lookup needs to be set up on each machine. The best way to accomplish this is by placing the host names of the pair into the hosts files (/etc/hosts) on each machine. This is preferable over other approaches, namely using reverse DNS, which adds a point of failure.

To enable reverse host lookups, on each host:

1. In /etc/nsswitch.conf, put "files" before "dns" to have the hosts file entries take precedence over DNS. For example:

   hosts: files dns

2. In the /etc/hosts file, add an entry for each host in the HA pair, as in the following example:

   192.168.144.128 host1
   192.168.144.137 host2
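To verify that the /etc/hosts entries are being used (a simple sanity check, not a toolkit step), resolve each peer hostname on both machines:

getent hosts host1
getent hosts host2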

Set up the SSH key

SSH must be installed on both hosts in a way that gives the user who runs the Controller passwordless SSH access to the other Controller system in the HA pair. You can accomplish this by generating a key pair on each node, and placing the public key of the other Controller into the authorized keys (authorized_keys) file on each Controller.

The following steps illustrate how to perform this configuration. The instructions assume an AppDynamics user named appduser, and Controller hostnames of node1 (the active primary) and node2 (the secondary). Adjust the instructions for your particular environment. Also note that you may not need to perform every step (for example, you may already have the .ssh directory and don't need to create a new one).

Although not shown here, some of the steps may prompt you for a password.

On the primary (node1):

1. Change to the AppDynamics user, appduser in our example:

su - appduser

2. Create a directory for SSH artifacts (if it doesn't already exist) and set permissions on the directory, as follows:

mkdir -p .ssh
chmod 700 .ssh

3. Generate the RSA-formatted key:


ssh-keygen -t rsa -N "" -f .ssh/id_rsa

4. Secure copy the key to the other Controller:

scp .ssh/id_rsa.pub node2:/tmp

On the secondary (node2):

1. As you did for node1, run these commands:

su - appduser
mkdir -p .ssh
chmod 700 .ssh
ssh-keygen -t rsa -N "" -f .ssh/id_rsa
scp .ssh/id_rsa.pub node1:/tmp

2. Add the public key of node1 that you previously copied to the secondary Controller host's authorized keys and set permissions on the authorized keys file:

cat /tmp/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 700 ~/.ssh/authorized_keys

On the primary (node1) again:

1. Add the secondary's public key that you previously copied to the authorized keys file and set permissions on it:

cat /tmp/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 700 ~/.ssh/authorized_keys

To test the configuration, try this:

ssh -oNumberOfPasswordPrompts=0 <hostname of the other node> "echo success"

Make sure the echo command succeeds.

Step 2: Install the HA Toolkit

With your environment configured, you can get and install the HA Toolkit. The toolkit is packaged as a shar (shell archive) file, which, when executed, extracts the set of scripts of the toolkit.

On the primary:

1. Change to the Controller home directory:


cd <install directory>/AppDynamics/Controller

2. Create the HA directory in the Controller home:

mkdir -p HA

3. Make the entire directory writeable and cd into the directory:

chmod +w HA
cd HA

4. Download the HA toolkit distribution file to the HA directory.
5. Make the file executable:

chmod 775 HA.shar

6. Run the shell archive script:

./HA.shar

Step 3: Prepare the Primary Controller

Once you have set up the environment and downloaded the toolkit to the primary Controller, you can set up the primary Controller for high availability. The steps for setting up HA differ if you are deploying a Controller for the first time or adding an HA secondary to an existing Controller, i.e., one that has already accumulated application data:

- To deploy two new Controllers as an HA pair, you first need to install the Controller on the primary machine. When installing, install the Controller as a standalone Controller (choose Not HA Enabled when prompted to select the high availability mode). For instructions on installing, see Install the Controller.
- To add an HA secondary to an existing standalone Controller deployment, you only need to verify that the user who runs the Controller has write access to the Controller home, as described below.

Once installation is finished, ensure that the Controller home and data directories are writable by the AppDynamics user.

Use the ls command to verify write privileges. The output should look similar to the output below, which shows the current privileges for the sample AppDynamics user, appduser.

ls -lad /opt/AppDynamics/Controller drwxr-xr-x. 18 appduser users 4096 Jan 26 18:18 /opt/AppDynamics/Controller
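If the directories are not writable by that user, one hedged way to correct ownership, assuming the sample user, group, and install path shown above, is:

sudo chown -R appduser:users /opt/AppDynamics/Controller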

After preparing the primary, you can set up replication, as described next.

Step 4: Set Up the Secondary Controller and Initiate Replication

To set up the secondary Controller, you run the replicate.sh script on the primary Controller machine. This script is the primary entry point for the toolkit. It performs these functions, among others:

- Deploys the secondary Controller
- Replicates data to the secondary
- Configures master-slave data replication between the primary and secondary databases
- Optionally, starts the watchdog process on the secondary that watches the primary and initiates a failover if needed

This script can only be run on the primary Controller. If you run the replicate script with super user (sudo) privileges, it performs the complete HA setup, including installing the secondary Controller, copying data to the secondary, and setting up master-slave database replication. If you do not run the script as a super user, you will need to perform some additional configuration tasks later to install system services. To perform those tasks, run the install-init.sh script as described in Installing as a Service.

You will need to run the replicate script at least twice. On the first pass, the script performs initial setup and replication tasks. The final replication pass (which is specified using the -f flag) completes replication and restarts the Controller. This results in a brief service downtime.

For an existing Controller deployment with a large amount of data to replicate, you may wish to execute replication multiple times before finalizing replication. The first time you run the script, it can take a significant amount of time to complete, possibly days. The second pass replicates the data changes accumulated while the first pass executed. If performed immediately after the first pass, the subsequent pass should take considerably less time. You can run the replication pass again until the amount of time it takes for the replicate script to complete fits within what would be an acceptable downtime window.

To set up secondary replication:

1. For the initial replication steps, invoke the replicate script, passing the hostname of the secondary and virtual IP for the Controller pair at the load balancer when invoking the replicate script. The command should be in the following form:

./replicate.sh -s <secondary hostname> -e <external vip hostname and port> -i <internal vip hostname and port>

Command options are:
-s – The hostname of the secondary, as you have configured in /etc/hosts
-e – The hostname and port for the Controller pair as exposed to clients at the load balancer or other type of reverse proxy. This is the address at which the App Agents and the Controller UI clients would typically address the Controller. For example: http://controllervip.example.com:80.
-i – If your Controllers reside within a protected network (relative to the load balancer) and cannot address the virtual IP directly, use this value to indicate an internal virtual IP address for the Controllers to use. For example: http://controllervip.internal.example.com:80.

For all available options to the script, run the replicate script with no arguments.
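As an illustration only, an initial pass using the sample secondary hostname (node2) from the SSH setup and the example VIP addresses above might look like the following; substitute values for your environment:

./replicate.sh -s node2 -e http://controllervip.example.com:80 -i http://controllervip.internal.example.com:80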

If running as non-root, the command asks that you run the install-init script manually as root to complete the installation.

2. If replicating a large amount of data, repeat the previous command to minimize the time required for the finalization pass in which the Controller is restarted.
3. When ready to finalize replication, run the script again, this time passing the -w and -f flags:

./replicate.sh -s <secondary hostname> -f -w -e <external vip hostname and port> -i <internal vip hostname and port>

The flags have the following effect:

-w – Starts the watchdog process on the secondary (see below for more information about the watchdog)

-f – Causes the script to finalize setup, restarting the Controllers.

4. When finished, the databases on both hosts will be up and replicating, with the primary Controller available for receiving application data and servicing user access. Verify on the secondary Controller that data has been replicated correctly, as follows:
   a. At the command line on the secondary Controller, go to the <controller_home>/bin directory.
   b. Log in to the secondary Controller database:


controller.sh login-db

c. Execute the following command:

SHOW SLAVE STATUS\G

This command should return a result similar to the following:

Seconds_Behind_Master: $Number_Of_Seconds_Behind_Master

If you get a non-zero number for this test, wait until the number becomes zero.
5. Check the status of the Controller using the commands described in User Privilege Escalation Requirements.

Starting the Controller Availability Watchdog

The watchdog process is a background process that runs on the secondary. It monitors the availability of the primary Controller, and, if it detects that the primary is unavailable, automatically initiates a failover. If you passed the -w flag to the replicate script when setting up HA, the watchdog was started for you. You can start the watchdog manually and configure its default settings as described here.

The watchdog performs a health check every 10 seconds. You can configure how long the watchdog waits after detecting the primary is down before it considers it a failure and initiates failover. By default, it waits 5 minutes, rechecking for availability every 10 seconds. Since the watchdog should not take over for a primary while it is in the process of shutting down or starting up, there are individual wait times for these operations as well.

To enable the watchdog:

1. Copy watchdog.settings.template to a file named watchdog.settings.
2. Edit watchdog.settings and configure the limits settings in the file. In effect, these limits define the thresholds which, when exceeded, trigger failover (a sample settings sketch appears after this procedure). The settings are:
   a. DOWNLIMIT: Length of time that the primary is detected as unavailable before the watchdog on the secondary initiates failover.
   b. FALLINGLIMIT: Length of time that the primary reports itself as shutting down before the watchdog on the secondary initiates failover. The secondary needs to allow the primary to shut down without initiating failover, so this setting specifies the length of time after which the primary may be considered "stuck" in that state, at which point the secondary takes over.
   c. RISINGLIMIT: Length of time that the primary reports itself as starting up before the watchdog on the secondary initiates failover. The secondary needs to allow the primary to start up without initiating failover, so this setting specifies the length of time after which the primary may be considered "stuck" in that state, at which point the secondary takes over.
   d. DBDOWNLIMIT: Length of time that the primary database is detected as unavailable before the watchdog on the secondary initiates failover.
   e. PINGLIMIT: Length of time ping attempts of the primary fail before the secondary initiates failover.
3. Create the control file that enables the watchdog. For example:

touch <controller_home>/HA/WATCHDOG_ENABLE

4. Enable read/write permissions on the file:

chmod 777 <controller_home>/HA/WATCHDOG_ENABLE


5. Start the service:

service appdcontroller start

Note: Running the replicate.sh script with the -w option at final activation creates the watchdog control file automatically.

Removing the WATCHDOG_ENABLE file causes the watchdog to exit.
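For orientation, a hypothetical watchdog.settings fragment is sketched below. The setting names come from the list above, but the values, units, and shell-style format are assumptions; always start from the watchdog.settings.template shipped with the toolkit rather than from this sketch.

# Hypothetical example only -- confirm names, units, and defaults
# against watchdog.settings.template before using any of these values.
DOWNLIMIT=300        # primary app server unreachable this long before failover
FALLINGLIMIT=900     # primary reports "shutting down" this long before failover
RISINGLIMIT=1800     # primary reports "starting up" this long before failover
DBDOWNLIMIT=300      # primary database unreachable this long before failover
PINGLIMIT=300        # ping failures persist this long before failover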

The Controllers are now configured for high availability. The following sections describe how to perform various administrative tasks in a high availability environment.

Bouncing the Primary Controller without Triggering Failover

To stop and start the primary Controller without initiating failover, remove the watchdog file on the secondary before stopping or initiating the restart on the primary. This causes the secondary to stop watching the primary, so that it doesn't initiate failover when the primary is briefly unavailable.

When the primary is finished restarting, you can add the file back to resume the watchdog process. The file is:

<controller_home>/HA/WATCHDOG_ENABLE

Starting and Stopping the Controller

After you have set up HA, the Controller is automatically started at boot time and shut down when the system is halted. You can start and stop the Controller service and HA facility manually at any time using the Linux service command as root user.

To start or stop the Controller manually, use the following commands:

To start:

service appdcontroller start

To stop:

service appdcontroller stop

Installing as a Service

The replicate script installs the Controller as a service for you automatically if you run the script as a root user. If you did not run the replicate script as root, after the replicate script process is finished, you can run the following script manually to complete the installation:

1. Change directories into <controller_home>/HA.
2. Run install-init.sh with one of the following options to choose how to elevate the user privilege:

-c    # use setuid C wrapper
-s    # use sudo
-p    # use prune wrapper
-x    # use user privilege wrapper

Optionally, you can also specify the following option: -a <machine agent install directory>. This option configures the machine agent to report to the Controller's self-monitoring account and installs an init script for it.

If you need to uninstall the service later, use the uninstall-init.sh script.

Once installed as a service, the Linux service utility can be run on either node to report the current state of the replication, background processes, and the Controller itself.

To check its status, use this command:

service appdcontroller status

The toolkit also writes status and progress logs for its various components to the Controller log directory.

Performing a Manual Failover and Failback

To fail over from the primary to the secondary manually, run the failover.sh script on the secondary. This kills the watchdog process, starts the app server on the secondary, and makes the database on the secondary the replication master. If you have custom procedures you want to add to the failover process (such as updating a dynamic DNS service or notifying a load balancer or proxy), you can add them to or call them from this script.

Should the secondary be unable to reach the MySQL database on the primary, it will then try to kill the app server process on the primary Controller, avoiding the possibility of having two Controllers active at the same time. This function is performed by the assassin.sh script, which continues to watch for the former primary Controller process to ensure that there aren't two active, replicating Controllers.

The process for performing a failback to the old primary is the same as failing over to the secondary. Simply run failover.sh on the machine of the Controller you want to restore as the primary. Note that if it has been down for more than seven days, you need to revive its database, as described in the following section.

Reviving a Controller database

The Controller databases can be synchronized using the replicate script if they have been out of sync for more than seven days. Synchronizing a database that is more than seven days behind a master is considered reviving a database. Reviving a database involves essentially the same procedure as adding a new secondary Controller to an existing production Controller, as described in Set Up the Secondary Controller and Initiate Replication.

In short, you run replicate.sh without the -f switch multiple times on the primary. Once the replication time has been reduced to fit within an acceptable service window, take the primary Controller out of service (stop the app server) and allow data synchronization to catch up.

Backing Up Controller Data in an HA Pair

An HA deployment makes backing up Controller data relatively straightforward, since the secondary Controller offers a complete set of production data on which you can perform a cold backup without disrupting the primary Controller service.

After setting up HA, perform a backup by stopping the appdcontroller service on the secondary and performing a file-level copy of the AppDynamics home directory (i.e., a cold backup). When finished, simply restart the service. The secondary will then catch up its data to the primary.
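A minimal sketch of that sequence, assuming the appdcontroller service installed by the toolkit, the sample install path used earlier in this topic, and a local /backups directory (adjust all paths and the backup destination for your environment):

# On the secondary:
sudo service appdcontroller stop
# File-level (cold) copy of the AppDynamics home directory
rsync -a /opt/AppDynamics/ /backups/AppDynamics-$(date +%F)/
sudo service appdcontroller start   # the secondary then catches up to the primary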

Updating the Configuration in an HA Pair

When you run the replicate script, the toolkit copies any file-level configuration customizations made on the primary Controller, such as configuration changes in the domain.xml file, to the secondary.

Over time, you may need to make modifications to the Controller configuration. After you do so, you can use the -j switch to replicate configuration changes only from the primary to the secondary. For example:

./replicate.sh -j

Troubleshooting HA

The HA toolkit writes log messages to the log files located in the same directory as other Controller logs, <controller_home>/logs by default. The files include:

- replicate.log: On the primary machine, this log contains events related to replication and HA setup.
- watchdog.log: On the secondary Controller host, this log contains event information generated by the watchdog process.
- assassin.log: On the secondary machine (or a machine on which the watchdog process has attempted to terminate the remote Controller process), information generated by the attempt to terminate the remote Controller process, typically due to a failover event.
- failover.log: Failover event information.

Platform Version Information

On this page:

Controller Version
Bundled Glassfish Server Version
Bundled MySQL Database Version
Bundled Java Version
Configuring the Controller to Use Java 1.8

This topic describes how to check the Controller version and the version of bundled components. This information is useful when troubleshooting the system or performing other administrative tasks.

Controller Version

In the AppDynamics UI, you can see the version of the Controller from the About AppDynamics dialog box accessible under the Help menu.

From the command line of the Controller machine, you can get the version number from the README.txt file located in the Controller home directory.

Bundled Glassfish Server Version

The Glassfish server is installed in <controller_home>/appserver.

The Glassfish server version is Glassfish 3.1.2.2.

Important: AppDynamics maintains and updates the bundled components as part of the standard Controller upgrade process. Do not attempt to upgrade a bundled component independently of the Controller upgrade procedure.

Bundled MySQL Database Version

The AppDynamics Controller uses MySQL as its default database, where it stores configuration data, metrics data, transaction snapshot data and events, and the history of incidents that occurred (both resolved and unresolved incidents are stored). The MySQL database files are installed in <controller_home>/db by default.

The latest AppDynamics release bundles MySQL version 5.5.52.

Important: AppDynamics maintains and updates the bundled components as part of the standard Controller upgrade process. Do not attempt to upgrade a bundled component independently of the Controller upgrade procedure.

Bundled Java Version

The Controller bundles and uses Java 1.7 by default. It includes Java 1.8 as an optional Java platform. (See the following section.)

Configuring the Controller to Use Java 1.8

While the Controller uses Java 1.7 runtime environment by default, it bundles the 1.8 JRE as well, which you can optionally switch to.

Switching to Java 1.8 requires a restart of the Controller, as described in the following steps.

From a terminal window on the Controller machine, follow these steps to switch the Controller:

1. Navigate to the Controller home directory.
2. Use the controller command script to change the JRE, as follows. On Linux:

bin/controller.sh set-controller-java-1.8

On Windows

bin\controller.bat set-controller-java-1.8

3. Restart the Controller: On Linux:

bin/controller.sh stop-appserver
bin/controller.sh start-appserver

On Windows

bin\controller.bat stop-appserver
bin\controller.bat start-appserver

To revert the change, run the command to set the JRE to 1.7, as follows:

./controller.sh set-controller-java-1.7

When finished, restart the Controller app server.

Configure the Email Server

On this page:

Configure the SMTP server
Troubleshoot notifications

The SMTP email server must be configured to enable email and SMS notifications and digests to be sent by the Controller.

Configure the SMTP server

1. While logged in as an administrator, click Alert & Respond from the top navigation menu bar and then Email/SMS Configuration.
2. Provide the connection information for the SMTP host and port. A SaaS Controller should be preconfigured with the appropriate settings, but verify that the settings are as follows:
   a. SMTP Host: localhost
   b. SMTP Port: 25

No authentication is needed. For an on-premise Controller, use the host and port settings for an SMTP server available in the Controller deployment environment.

3. Customize the sent-from address in notification emails in the From Address field. By default, emails are sent by the root Controller user.
4. If the SMTP host requires authentication, configure the credentials in the Authentication settings.
5. If you want to add any text to the beginning of the notification, enter it in the Notification Header Text field.
6. If you are using SMS, do one of the following:
   - Select Default and choose one of the available carriers from the pulldown menu.
   - Select Custom and enter the phone number receiving the message as <phone number>@<carrier gateway domain>.

For example, a mobile phone in the United States serviced by AT&T might be:

<10-digit mobile number>@txt.att.net

A mobile phone in the United Kingdom serviced by Textlocal might be:

<mobile number>@txtlocal.co.uk

See SMS gateway by country for information on the most common SMS gateways.
7. Test the configuration by sending an email.
8. Save the settings.

Troubleshoot notifications

If you do not receive notifications for health rule violations, it could be because the default SMTP server timeout period is too short. To troubleshoot, increase the value of the mail.smtp.socketiotimeout Controller Setting in the Administration Console. The default value is 30 seconds.

Access the Administration Console

On this page:

Access the Controller Admin Console
Access the Glassfish Admin Console

Related pages:

Controller Settings for Standalone Machine Agents

To configure global Controller administration settings, you use the AppDynamics Administration Console. The AppDynamics Administration Console should not be confused with the administration console for the underlying GlassFish application server or with the administration page accessible in the general AppDynamics UI. The AppDynamics Administration Console lets you configure certain global settings such as metric retention periods, UI notification triggers, tenancy mode, and accounts in multi-tenancy mode.

AppDynamics recommends that you do not change Controller settings in the console except as guided by an AppDynamics representative or as specifically directed by documentation.

Access the Controller Admin Console

1. If you are logged into a Controller UI session with an account other than the root user, log out or open a new browser window in private (incognito) mode. If you do not, you will get an "Access Denied" error when you attempt to open the console page.
2. In the browser, enter the URL of the Administration Console:

http://<controller host>:<port>/controller/admin.jsp


The console is served on the same port as the Controller UI, port 8090 by default.
3. Log into the system account with the root user password. The root user is a built-in global administrator for the Controller. Use the password you set for this user when installing. For more information on root and other types of administrative users, see Administrative Users. Note that the root user password is different from a normal AppDynamics account password. It is not the same as the account owner or account administrator password. If you are logged into the Controller using your current account, you need to log out of that account and then back in as the root user to access the admin console.

For information on changing the Controller root user password, see Administrative Users.

Access the Glassfish Admin Console

In rare cases, you may need to log in to the administration console for the GlassFish application server underlying the Controller. For example, you may need to add an HTTP listener or modify Glassfish logging settings. The admin console provides a browser interface for performing many of the same tasks you can perform using the asadmin command-line utility. You can access the console for the Glassfish server using the built-in user account named admin.

For security reasons, access to the Glassfish browser interface is limited to local machine access by default, so the following steps should be performed from a browser on the Controller machine. Attempts to access the console remotely trigger the error message "Secure Admin must be enabled to access the DAS remotely."

To access the Glassfish administration console:

1. If you do not know the password for the admin user, get it from a file in the Controller home directory named .passwordfile. The password is the value of the AS_ADMIN_PASSWORD variable.
2. From a web browser on the Controller machine, open the following URL:

http://localhost:4848

Note that port 4848 is the default port number for the Glassfish administration console, but it may have been set to another value at installation time. If the default port doesn't work and you are unsure of what port number to use, you can check the port configured for the network-listener element named admin-listener in the domain.xml file.
3. Log in as user admin, with the password you retrieved in step 1.
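For example, both lookups can be made from a shell on the Controller machine. The domain.xml path below assumes the default layout referenced elsewhere in this document; adjust it if your installation differs:

# Retrieve the GlassFish admin password (value of AS_ADMIN_PASSWORD)
grep AS_ADMIN_PASSWORD <controller_home>/.passwordfile

# Find the port configured for the admin-listener network listener
grep -A 2 "admin-listener" <controller_home>/appserver/glassfish/domains/domain1/config/domain.xml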

For information on changing the Glassfish admin user password, see Administrative Users.

Modify GlassFish JVM Options

On this page:

Command Format
Description
Restart
Examples

AppDynamics provides a utility for modifying JVM options for the Controller. The modifyJvmOptions utility is used to modify the JVM options in the Controller's domain.xml file. Changes made with the utility are retained across Controller upgrades.

To run the utility, invoke the modifyJvmOptions from the /bin subdirectory of the Controller home directory:

- On Linux, use modifyJvmOptions.sh
- On Windows, run modifyJvmOptions.bat in an elevated command prompt

Any change to a Java option requires a Controller application server restart to take effect.

You can edit Controller settings that are not JVM options by manually editing the domain.xml file or by using the GlassFish asadmin utility.

Command Format

modifyJvmOptions [add | delete] (jvm-option-name=jvm-option-value) [@jvm-option-name=jvm-option-value]*

Separate multiple JVM options using the @ sign.

On Microsoft Windows, you will need to enclose the entire JVM option names and values string in double quotation marks. The examples illustrate this requirement.

Description

The modifyJvmOptions utility adds or deletes command-line options that are passed to the Java application launcher when GlassFish Server is started. These are in addition to the options that are preset in the GlassFish Server.

Use modifyJvmOptions to add or delete the following types of options:

- Java system properties: These options are set using the -D option of the Java application launcher.
- Startup parameters for the Java application launcher: These options are set using the dash (-) character.

The utility does not validate your settings, so make sure you use valid values. Invalid options can cause Controller startup to fail.

To add multiple options, either run the utility once for each option or use an @ character to separate the options. Use care when attempting to add multiple options at once. If you use a separator other than the @ character, the result is a single key with the intended additional keys embedded in its value.

To update an existing value, use the modifyJvmOptions utility to add the option with the new value. If the setting already exists in the configuration, the utility removes the existing settings and adds the setting with your new value.

To delete a Java option, supply the full key name and value of the option. You cannot delete an option by key name only. For example:

modifyJvmOptions.sh delete -Dunixlocation=/root/example

To see a list of existing options in the domain configuration, use the list command, "modifyJvmOptions.sh list" or "modifyJvmOptions.bat list".

Restart

After you have finished adding or deleting options using modifyJvmOptions, stop and restart the GlassFish app server:

Linux

<controller_home>/bin/controller.sh stop-appserver
<controller_home>/bin/controller.sh start-appserver

Windows

The following commands must be run in an elevated command prompt, which you can access by right-clicking on the Command Prompt icon in the Windows Start menu and choosing Run as administrator.

<controller_home>\bin\controller.bat stop-appserver
<controller_home>\bin\controller.bat start-appserver

Examples

Modify a startup parameter for the Java Application Launcher

The following command sets the maximum available heap size to 1024MB using modifyJvmOptions.sh:

modifyJvmOptions.sh add "-Xmx1024m"

The following command deletes the maximum available heap size parameter set in the previous command:

modifyJvmOptions.sh delete "-Xmx1024m"

Modify Multiple Startup Parameters for the Java Application Launcher

The following command sets the maximum available heap size to 1024MB and requests details about garbage collection using modifyJvmOptions.bat.

To run the command, open an elevated command prompt by right-clicking on the Command Prompt icon in the Windows Start menu and choosing Run as administrator.

modifyJvmOptions.bat add "-Xmx1024m@-XX\:+PrintGCDetails"

The following command deletes the startup parameters set in the previous command

modifyJvmOptions.bat delete "-Xmx1024m@-XX\:+PrintGCDetails"

Modify Multiple Java System Properties

The following commands add and delete multiple Java system properties using modifyJvmOptions.bat. Note that the quotation marks enclosing the argument string are required. The commands must be run in an elevated command prompt, which you can access by right-clicking on the Command Prompt icon in the Windows Start menu and choosing Run as administrator.

modifyJvmOptions.bat add "-Dunixlocation=/root/example@-Dvariable=\$HOME@-Dwindowslocation=d\:\\sun\\appserver@-Doption1=value1"
modifyJvmOptions.bat delete "-Dunixlocation=/root/example@-Dvariable=\$HOME@-Dwindowslocation=d\:\\sun\\appserver@-Doption1=value1"

The following commands add and delete multiple Java system properties using modifyJvmOptions.sh. Quotation marks enclosing the argument string are not required.

modifyJvmOptions.sh add -Dunixlocation=/root/example@-Dvariable=\$HOME@-Dwindowslocation=d\:\\sun\\appserver@-Doption1=value1
modifyJvmOptions.sh delete -Dunixlocation=/root/example@-Dvariable=\$HOME@-Dwindowslocation=d\:\\sun\\appserver@-Doption1=value1

Controller Tenant Mode and Accounts

On this page:

Differences Between Single and Multi-Tenant Mode
Accessing an Account in the Controller UI
Switch from Single-Tenant to Multi-Tenant Mode
Create Accounts in Multi-Tenant Mode
Applying a License to a Multi-Tenant Controller

The Controller tenancy mode determines whether context within the Controller is divided into separate accounts. Each account is an entirely distinct realm within the Controller UI for purposes of user authentication, agent reporting, and monitoring.

The tenancy mode of an on-premise AppDynamics Controller (whether it is single-tenancy or multi-tenancy) is set at installation time. You can switch the tenancy mode from single-tenant to multi-tenant mode later, as described here.

A single-tenancy Controller is suitable for most installations. Only very large installations or installations that have very distinct sets of users may require multi-tenancy.

Differences Between Single and Multi-Tenant Mode

In multi-tenant mode:

- You can create multiple accounts (tenants) in the Controller. Each account will have its own set of users and applications.
- The Controller login page includes an additional field where users need to choose an account to log in to.
- Essentially, multi-tenant mode allows you to partition users and access to application data in a logical, secure way.

In single-tenant mode:

- There is only one account (tenant) in the Controller system. All users and applications are part of this single built-in account, so all users have access to all monitored applications in this mode.
- The account is not exposed to users in the Controller UI. The account field in the login page is omitted for single-tenant mode.
- AppDynamics recommends single-tenant mode for most installations.

Accessing an Account in the Controller UI

For a SaaS Controller or if multi-tenancy is enabled for an on-premise Controller, users need to enter the account name in the Account field when logging in to the Controller UI.

Instead of having users enter the account name manually, if the account name is passed in the Controller URL as the value of the accountName query parameter, the Account field is pre-populated with the account name.

For example, for a SaaS Controller, pass the account name in the URL as follows:

https://<account_name>.saas.appdynamics.com/controller/#/accountName=<account_name>

This form of the URL is useful in links that are shared with Controller UI users in an organization, since it relieves those users from having to know and manually enter their account names.

Switch from Single-Tenant to Multi-Tenant Mode

While switching a Controller from single-tenancy to multi-tenancy mode is supported, switching from multi-tenancy to single-tenancy is not. Do not switch to multi-tenant mode unless you are sure that is the mode in which you want this Controller to operate.

To change to multi-tenant mode, in the Administration Console, find the multitenant.controller Controller Setting and set its value to true.

Create Accounts in Multi-Tenant Mode

In multi-tenant mode, you can add accounts in the AppDynamics Administration Console. To add an account, after logging in to the administration console as the AppDynamics root user, click Accounts and then New. The Create Account page appears:

In the new account page, define the licensing entitlements that apply to the account and when done, click the Create icon.

Account-level license unit limits enable you to prevent a particular account from using more licensing units than it should. Note that the overall license limits applicable at the Controller level are independent of any specific limits you apply at the account level.

For example, say that you created an account specifically for the exploratory use of a particular group in your organization. If the total limit for the Controller is 100 Java Agents, you can ensure that the new account never interferes with the license availability of another account by setting the Java Units Provisioned value for the account to a much smaller limit, say 5. If you were to set it to 100 instead, and there are other accounts that are also set to 100, the available units for the Controller would be occupied by the first 100 agents that connect to the Controller, regardless of the accounts they report in to. Similarly, you can limit the life span of the account by setting an expiration date for the license.

You can see how many total license units are available to your Controller in the License Information page.

After enabling multi-tenant mode, users need to specify the account they want to log into in the Account field in the Controller UI login screen. For details, see Java Agent Configuration Properties, .NET Agent Configuration Properties, Database Agent Configuration Properties, and Standalone Machine Agent Configuration Property Reference.

Applying a License to a Multi-Tenant Controller

1. Log in with the root user at http://<controller host>:<port>/controller/admin.jsp.
2. Click Accounts and select the account you want to apply the license to.
3. Update the license expiration date and license units based on license.lic.
4. Save the changes.
5. With an administrator account, check the license page to see the latest changes.

Customize System Notifications

On this page:

System Events Notification
System Use Notification

Related pages:

Access the Administration Console

You can customize system events and system use notification messages from the Administration Console.

System Events Notification

Certain system events trigger event notifications in the Controller UI. You can configure which type of events appear as notifications in the UI as described here.

Configure Events that Trigger UI Notifications

1. Log in to the Administration Console.
2. Go to the Controller Settings.
3. Search for the system.notification.event.types property. The value of this property determines which types of events result in UI notifications.
4. Set the types of events you want to see by adding them to the comma-separated string in the dialog box. To disable notifications for a particular type of event, remove it from the list. Do not use spaces between commas.
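For example, a value that limits UI notifications to license, disk space, and agent version mismatch events (event names taken from the table below) would look like this:

LICENSE,DISK_SPACE,CONTROLLER_AGENT_VERSION_INCOMPATIBILITY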

Notification Event Types

Event Value What This Event Notification Means

LICENSE There is an issue with the status of your license.

DISK_SPACE There is an issue with the amount of disk space left on your system.

CONTROLLER_AGENT_VERSION_INCOMPATIBILITY A mismatch between the version of the agent and the version of the controller has been detected.

CONTROLLER_EVENT_UPLOAD_LIMIT_REACHED The limit on the number of events per minute that can be uploaded to the controller from this account has been reached. Once the limit is reached no more events — other than certain key ones — are uploaded for that minute.

CONTROLLER_RSD_UPLOAD_LIMIT_REACHED The limit on the number of request segment data (RSDs) per minute that can be uploaded to the controller from this account has been reached. RSDs are related to snapshots. Once the limit is reached no more RSDs — other than certain key ones — are uploaded for that minute.

CONTROLLER_METRIC_REG_LIMIT_REACHED The limit for registering metrics for this account has been reached. No further metric registrations are accepted.

CONTROLLER_ERROR_ADD_REG_LIMIT_REACHED The limit for registering error Application Diagnostic Data (ADDs) for this account has been reached. No further error ADD registration is accepted.

CONTROLLER_ASYNC_ADD_REG_LIMIT_REACHED The limit for registering async ADDs for this account has been reached. No further async ADD registration is accepted.

AGENT_ADD_BLACKLIST_REG_LIMIT_REACHED If the Agent attempts to register an ADD above the limit, the Controller rejects the attempt and adds the ADD to a blacklist. There is a limit to the size of the blacklist. This event indicates that that limit has been reached.

AGENT_METRIC_BLACKLIST_REG_LIMIT_REACHED If the Agent attempts to register a metric above the limit, the Controller rejects the attempt and adds the metric to a blacklist. There is a limit to the size of the blacklist. This event indicates that that limit has been reached.

System Use Notification

The system use notification is an optional and configurable message that includes information on privacy and security notices. If enabled, the displayed message must be acknowledged before granting the user further access.

Configure System Use Notification

1. Log in to the Administration Console.
2. Go to the Controller Settings.
3. Search for the system.use.notification.message property. The value of this property is the system use message.
4. Enter your post-login message, which will be displayed every time a user logs in, informing the user of the system usage requirements. There is a 10,000 character limit. Here is an example message:

This is a U.S. Government computer system, which may be accessed and used only for authorized Government business by authorized personnel. Unauthorized access or use of this computer system may subject violators to criminal, civil, and/or administrative action. All information on this computer system may be intercepted, recorded, read, copied, and disclosed by and to authorized personnel for official purposes, including criminal investigations. Such information includes sensitive data encrypted to comply with confidentiality and privacy requirements. Access or use of this computer system by any person, whether authorized or unauthorized, constitutes consent to these terms. There is no right of privacy in this system.

Migrate the Controller

On this page:

Before Starting

Migrate by Copying the Data Directory
Migrating by Copying the Entire Controller Directory
Controller Migration FAQ

This topic describes how to migrate a Controller from a physical or virtual machine to a new physical machine.

Before Starting

Migrating the Controller often results from the need to move the Controller to new hardware in response to increased load. Before starting, make sure that the new hardware meets the AppDynamics requirements as described in Controller System Requirements. Specifically, review the Controller hardware performance profiles and the hardware requirements per profile information to verify that the target Controller hardware meets the RAM size and Disk I/O requirements.

The old and new Controller host machines must use the same processor architecture, ia32 or x86_64, and the same operating system.

Before you migrate the Controller, back up the Controller and the Platform Administration Application database. See Controller Data Backup and Restore for more information.

You need to acquire new license files for the new Controller hardware since licenses are tied to the machine MAC addresses. Send the MAC addresses to [email protected] and request a new license file, or two new files if upgrading to an HA pair.

Migrate by Copying the Data Directory

The following instructions take you through the steps to install the Controller on a new machine and migrate the Controller data from an old machine. The Controllers do not need to be the same version, but the new Controller cannot be an earlier version than the old one.

The migration occurs in two phases. The first phase consists of preliminary steps that do not require a Controller shut down and can be done at any time. The second phase requires the Controller to be shut down and should be done during a service window to avoid disruption.

Prepare the New Machine

1. Install the Controller on the new machine.
2. Apply the license for the new machine. See Before Starting for more information.
3. Stop the new Controller with the following command: bin/stopController.sh.
4. Remove all files and directories from the new Controller's data directory, <controller_home>/db/data, except the following:
   mysql
   performance_schema
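One hedged way to perform step 4 from a shell, assuming the default install path shown earlier in this document; double-check the path and the preserved directory names before running anything destructive:

cd /opt/AppDynamics/Controller/db/data
# Remove everything except the mysql and performance_schema directories
find . -mindepth 1 -maxdepth 1 ! -name mysql ! -name performance_schema -exec rm -rf {} +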

Migrate the Controller

1. In the command line, navigate to the controller/bin directory.
2. Shut down the old Controller.
3. Copy all the files and directories from the MySQL data directory on the old host to the MySQL data directory on the new host, except the following:
   ddl_log.log
   mysql
   performance_schema
   <hostname>.pid
   slow.log
4. If the AppDynamics version on the new Controller is later than that of the old Controller, re-run the Controller installer to upgrade the database you just copied to the latest schema version.
5. Migrate the Controller agent's access key. Copy <controller_home>/appserver/glassfish/domains/domain1/appagent/<version>/conf/controller-info.xml from the old Controller host to the new Controller host.
6. Merge any manual customizations you have made to configuration files from the original Controller directory to the equivalent directory on the new machine. You may have made manual customizations to one or more of the following files:
   <controller_home>/db/db.cnf
   <controller_home>/appserver/glassfish/domains/domain1/config/domain.xml
   <controller_home>/appserver/glassfish/domains/domain1/config/ and any other customizations in this directory. In particular, customizations to keystore.jks and cacerts.jks need to be propagated to the new Controller.
   <controller_home>/appserver/glassfish/domains/domain1/appagent/ver4.1.X.Y/conf/cacerts.jks
   <controller_home>/appserver/glassfish/domains/domain1/applications/controller/controller-web_war/WEB-INF/flex/services-config.xml (if using SSL)
7. If SSL traffic terminates at the Controller instead of at another destination, such as a reverse proxy, update the default keystore with the SSL certificate you use.
8. Start the new Controller.
9. Modify your network configuration so that the traffic previously directed to your old Controller now goes to the new Controller. Depending on your data center topology, this may require modifying load balancer rules or changing DNS records so that the hostname for the old Controller machine now points to the IP address of the new Controller machine.
10. Verify that the Controller UI is reachable at the new address.
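As a rough illustration of the copy in step 3 (not an official procedure), assuming default install paths on both hosts, SSH access from the old host to the new one, and a new host named newhost:

# Run on the old Controller host after shutting the Controller down (step 2).
rsync -a \
  --exclude 'ddl_log.log' --exclude 'mysql/' --exclude 'performance_schema/' \
  --exclude '*.pid' --exclude 'slow.log' \
  /opt/AppDynamics/Controller/db/data/ newhost:/opt/AppDynamics/Controller/db/data/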

Migrating by Copying the Entire Controller Directory

Instead of migrating a data directory to a new Controller installation, as described in the previous section, it is possible to migrate a Controller between machines by copying the entire Controller directory.

However, there are two requirements to performing the migration this way:

The new machine must have the same IP address and host name as the old machine. The new machine must have the exact same directory structure as the old installation location.

To migrate by copying the Controller directory to a new machine

1. In the command line, navigate to the controller/bin directory.
2. Shut down the old Controller. Shutting down the Controller before copying its data ensures that all data will be moved to the new machine. For details see Start or Stop the Controller.
3. Copy the Controller home directory from the old machine to the new machine. Copy the directory into the exact same directory structure.
   a. Migrate the Controller agent's access key. Copy /appserver/glassfish/domains/domain1/appagent//conf/controller-info.xml from the old Controller host to the new Controller host.
4. If SSL traffic arrives at the Controller instead of at another destination, such as a reverse proxy, update the default keystore.
5. Start the new Controller.
6. Modify your network configuration so that the traffic previously directed to your old Controller now goes to the new Controller. Depending on your data center topology, this may require modifying load balancer rules or changing DNS records so that the hostname for the old Controller machine now points to the IP address of the new Controller machine.

IMPORTANT: Do not start the Controller on the old machine again!

Controller Migration FAQ

After moving the data to the new machine, why do I get a 443, permission denied error?
Can I take a hot backup and use that for the new machine?
Do I need a new license if moving the Controller from a virtual to a physical environment?

After moving the data to the new machine, why do I get a 443, permission denied error?

This can happen if port 443 is bound to some other application. Change the HTTPS port, for example, from 443 to 5443.
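For example, assuming the HTTPS listener is named http-listener-2 and the standard GlassFish dotted-name path applies (both assumptions; verify against your installation), you could change the port with the asadmin tool from /appserver/glassfish/bin and then restart the Controller:

./asadmin set configs.config.server-config.network-config.network-listeners.network-listener.http-listener-2.port=5443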

Can I take a hot backup and use that for the new machine?

You can perform the migration with data from either a hot or cold backup, but it is important to get the backup of the data directory from /db. However, AppDynamics strongly recommends that you perform a cold backup of the data.

A hot backup avoids taking the Controller down for a long time, but you will still lose some data when you migrate. This is because a hot backup only contains the data from the point at which the backup started, not from when it ended.

Do I need a new license if moving the Controller from a virtual to a physical environment?

If the physical machine has the same MAC address, you can reuse the license. If that is not the case, contact your AppDynamics representative or [email protected] to request a new license.

Controller Logs

On this page:

Controller Log Files Manage Log Files Change the default location or names of the Controller logs Logging Level Granularity Change Default Logging Level by Component

The Controller generates various log files you can use to troubleshoot issues with your deployment.

Controller Log Files

install.log: Information about events of the install process such as extraction, preparation and other post-processing tasks. It is located at .
server.log: Information for the embedded Glassfish application server used by the Controller. It is located at /logs.
audit.log: Information about Account/User/Group/Role CRUD and User login/logout operations. This can be used to forward auditable events from the AppDynamics Controller into a central log management system or SIEM. It is located at /logs.
database.log: Information for the MySQL database that is used by the Controller. It is located at /logs.
installation.log: Information about the installation specific to Install4j. It is located at /.install4j. Also include the entire .install4j directory and the latest /i4j_log*, for example /tmp/i4j_log*.
startAS.log: Output generated by the underlying Glassfish domain for the Controller.

Manage Log Files

The application server is preconfigured to rotate the server.log file regularly, based on settings in the domain configuration file.

For the other log files, database.log and startAS.log, you need to set up log rotation to prevent them from consuming excessive disk space. You also need to set up log rotation for an additional log file, /db/data/slow.log. This log contains information about slow MySQL queries.

The tool you use to perform the rotation depends on your operating system. On Linux, you can use the mysql-log-rotate script. The script is included with the Controller database installation at /db/support-files. You need to modify the script for your environment, since it is not set up to rotate the database.log file by default. On other systems, you need to create or install a script that performs log rotation and make sure that it gets run regularly, for example, by cron or an equivalent task scheduler.
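On Linux hosts that use logrotate, a minimal sketch of a rotation policy might look like the following. The file path /etc/logrotate.d/appd-controller and the Controller home /opt/appdynamics/controller are hypothetical; tune the schedule and retention to your environment.

# /etc/logrotate.d/appd-controller (hypothetical)
/opt/appdynamics/controller/logs/database.log
/opt/appdynamics/controller/logs/startAS.log
/opt/appdynamics/controller/db/data/slow.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
    copytruncate
}

copytruncate is used here so the files can be rotated without signaling the MySQL or Glassfish processes.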

Change the default location or names of the Controller logs

1. In a web browser, log in to the Controller's Glassfish administration console, as described in Access the Administration Console.
2. From the left-side navigation tree, expand Configurations -> server-config and click Logger Settings.
3. Set the new location for server.log by modifying the Log File value. The default value points to the logs directory located at the root of the Controller home directory.

${com.sun.aas.instanceRoot}/../../../../logs/server.log

If you specify a directory that does not exist, it is created when you restart the application server.
4. Change the database.log location by opening the /db/db.cnf file.
5. Set the value of the log-error property to the new location of the database.log file. This directory location must exist before you restart the Controller or you will get start-up errors.
6. Change the startAS.log location by opening the /bin/controller.sh file.

7. Edit the log location specified in the following line to reflect the new location:

nohup ./asadmin start-domain domain1 > $INSTALL_DIR/logs/startAS.log

8. Open the /appserver/glassfish/domains/domain1/config/domain.xml file, and change the log location specified in the following (an illustrative snippet follows this list):
   The log-root attribute of the domain element.
   The file attribute of the log-service element.
   The tx-log-dir attribute of the transaction-service element.
9. If desired, copy any existing logs from the default directory (/logs) to the new location.
10. Restart the Controller. See Start or Stop the Controller.
11. Verify that the database.log, server.log, and startAS.log files are being written to the new location and remove the old log files.
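As an illustration of the attributes referenced in step 8, the relevant portions of domain.xml look roughly like the following. The paths are hypothetical, and surrounding attributes and element nesting are omitted.

<domain log-root="/opt/appdynamics/controller/logs">
  <log-service file="/opt/appdynamics/controller/logs/server.log"/>
  <transaction-service tx-log-dir="/opt/appdynamics/controller/logs"/>
</domain>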

Logging Level Granularity

The Controller logs provide information about possible errors in Controller operations. By default, the Controller writes to the log at the INFO level. When debugging your Controller deployment, you may need to increase the logging level to generate additional information.

You can set the logging level by Controller component. The components include:

Agents Business Transactions (BTS) Events Incidents Information Points (IPS) Metrics Orchestration Rules Snapshots

Change Default Logging Level by Component

By default, the Controller generates logs at the INFO level. You can change the level for one or all of the components. This may be needed, for example, when you are debugging your system and want the Controller to generate more information in the logs. On the other hand, you may wish to reduce logging verbosity to slow the growth of the log files. The following steps describe how to change the default log levels.

1. In a web browser, log in to the Controller's Glassfish administration console, as described in Access the Administration Console.
2. From the left-side navigation tree, expand Configurations -> server-config and click Logger Settings.
3. Click the Log Levels tab.
4. Modify those components that start with "com.appdynamics". By sorting the list by name, you can quickly access the com.appdynamics components. For each component, modify the log level by choosing a new level from the Log Level menu. For example, to debug the system, we suggest setting the log level to FINE.
5. Click Save.

Controller Dump Files

On this page:

Get Heap and Histogram Dump Files Take Four Thread Dumps at Three Second Intervals Send the Files to the AppDynamics Support Team

The following steps describe how to collect troubleshooting information for your Controller. You may be asked to provide this information when troubleshooting with the AppDynamics support team.

Get Heap and Histogram Dump Files

Get the process id of the Controller to use in subsequent commands.

ps -ef | grep java

Get the heap dump before garbage collection using the following command:

/bin/jmap -dump:format=b,file=heap_before_live.bin

Get the histogram before garbage collection using the following command:

/bin/jmap -histo | head -200 > histo_before_live.txt

Get the histogram after garbage collection using the following command:

/bin/jmap -histo:live | head -200 > histo_after_live.txt
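Putting these together, a hedged end-to-end sketch looks like the following. The process ID 12345 is hypothetical; substitute the Controller PID found with ps, and run jmap from the Controller's bundled JDK/JRE bin directory.

CONTROLLER_PID=12345   # hypothetical PID from ps -ef | grep java
jmap -dump:format=b,file=heap_before_live.bin $CONTROLLER_PID
jmap -histo $CONTROLLER_PID | head -200 > histo_before_live.txt
jmap -histo:live $CONTROLLER_PID | head -200 > histo_after_live.txt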

Take Four Thread Dumps at Three Second Intervals

Using the Controller process ID, execute the following command:

kill -3

Save the /appserver/glassfish/domains/domain1/logs/jvm.log file.
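To automate the four thread dumps at three-second intervals, a minimal shell sketch (again assuming a hypothetical PID of 12345) is:

CONTROLLER_PID=12345   # hypothetical PID of the Controller JVM
for i in 1 2 3 4; do
  kill -3 $CONTROLLER_PID   # SIGQUIT writes a thread dump to the JVM log
  sleep 3
done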

Send the Files to the AppDynamics Support Team

If asked to provide the information to the AppDynamics support team, send the following files generated by these steps:

heap_before_live.bin histo_before_live.txt histo_after_live.txt jvm.log

Security

AppDynamics includes security features that help to ensure the safety and integrity of your deployment.

The Controller is installed with an HTTPS port enabled by default. SSL secures client connections and allows clients to authenticate the Controller. The Controller UI supports HTTP Basic Authentication, along with SAML and LDAP authentication. Role-based access controls in the UI allow you to manage user privileges.

While the security features of the Controller are enabled out of the box, there are some steps you should take to ensure the security of your deployment. These steps include but are not limited to:

The SSL port uses a self-signed certificate. If you intend to terminate SSL connections at the Controller, you should replace the default certificate with your own, CA-signed certificate. If you replace the default SSL certificate on the Controller, you will also need to establish trust for the Controller's public key on the App Agent machine.

As an alternative to terminating SSL at the Controller, you can put the Controller behind a reverse proxy that terminates SSL, relieving the Controller from the burden of processing SSL connections.

Along with a secure listening port, the Controller provides an unsecured, HTTP listening port as well. You should disable the port or block access to the port from any untrusted networks. Make sure that your App Agents connect to the Controller, or to the reverse proxy if terminating SSL at a proxy, with SSL enabled. The Controller and underlying components, Glassfish and MySQL, include built-in user accounts. Be sure to change the passwords for the accounts regularly and, in general, follow best practices for password management for the accounts. For information on changing the passwords for built-in users, see Administrative Users.

Controller SSL and Certificates

On this page:

About Controller SSL and Certificates Before Starting Create a Certificate and Generate a CSR Import an Existing Keypair into the Keystore Verify the Use of SSL Change Keystore Password Updating an Expired Certificate

The Controller comes with a preconfigured HTTPS port (port 8181 by default) that is secured by a self-signed certificate. This page describes how to replace the default certificate with your own custom certificate.

About Controller SSL and Certificates

For production use, AppDynamics strongly recommends that you replace the self-signed certificate with a certificate signed by a third-party CA or your own internal CA. If you are deploying .NET Agents, you must replace the self-signed certificate with one signed by a CA, since the .NET agents do not work with self-signed certificates.

This page describes how to replace the existing key in the default keystore. Replacing the entire keystore is not recommended, unless you first export the existing artifacts from the default keystore and import them into your own keystore.

The default Controller keystore includes the following artifacts:

glassfish-instance: A self-signed private key provided with the Glassfish application server.
s1as: A self-signed private key provided with the Glassfish application server, used by the Controller for secure communication on port 8181.
reporting-instance: A private key used by the reporting service, the service that enables scheduled reports.

You can view the contents of the keystore yourself using the keytool utility. To do so, from the /jre/bin directory, run the following command. Enter the default keystore password changeit when prompted.

keytool -list -v -keystore ../../appserver/glassfish/domains/domain1/config/keystore.jks

The exact steps to implement security typically vary depending on the security policies for the organization. For example, if your organization already has a certificate to use, such as a wildcard certificate used for your organization's domain, you can import the existing certificate into the Controller keystore. Otherwise, you'll need to generate a new one along with a certificate signing request. The following sections take you through these scenarios.

Before Starting

The following instructions describe how to configure SSL using the Java keytool utility bundled with the Controller installation. You can find the keytool utility in the following location:

/jre/bin

The steps assume that the keytool is in the operating system's path variable. To run the commands as shown, you first need to put the keytool utility in your system's path. Use the method appropriate for your operating system to add the keytool to your path.

While the directory paths in this topic use forward slashes, the instructions apply to both Linux and Windows Operating System environments. The steps note where there are differences in the use of commands between operating systems.

Create a Certificate and Generate a CSR

If you don't have a certificate to use for the Controller, create it as follows. In these steps, you generate a new certificate within the Controller's active keystore, so it has immediate effect.

The steps are intended to be used in a staging environment, and require the Controller to be shut down and restarted. Alternatively, you can generate the key as described here but in a temporary keystore rather than the Controller's active keystore. After the certificate is signed, you can import the key from the temporary keystore to the Controller's keystore.

1. At a command prompt, change directories to the following location:

/appserver/glassfish/domains/domain1/config

2. Create a backup of the keystore file. For example, on Linux, you can run:

cp keystore.jks keystore.jks.backup

On Windows, you can use the copy command in a similar manner.
3. If it's still running, stop the Controller.
4. Delete the existing certificate with the alias s1as from the keystore:

keytool -delete -alias s1as -keystore keystore.jks

5. Create a new key pair in the keystore using the following command. This command creates a key pair with a validity of 1825 days (5 years). Replace 1825 with the validity period appropriate for your environment, if desired.

keytool -genkeypair -alias s1as -keyalg RSA -keystore keystore.jks -keysize 2048 -validity 1825

Follow the onscreen instructions to configure the certificate. Note that:
   For the first and last name, enter the domain name where the Controller is running, for example, controller.example.com.
   Enter the default password for the key, changeit.

This generates a self-signed certificate in the keystore. We'll generate a signing request for the certificate next. You can now restart the Controller and continue to use it. Since the Controller still has a temporary self-signed certificate, browsers attempting to connect to the Controller UI will get a warning to the effect that its certificate could not be verified.


See Change Keystore Password for information on changing the default password for the keystore and certificates.

6. Generate a certificate signing request for the certificate you created as follows:

keytool -certreq -alias s1as -keystore keystore.jks -file AppDynamics.csr

7. Submit the certificate signing request file generated by the command (AppDynamics.csr in our example command) to your Certificate Authority of choice. When it's ready, the CA will return the signed certificate and any root and intermediary certificates required for the trust chain. The response from the Certificate Authority should include any special instructions for importing the certificate, if needed. If the CA supplies the certificate in text format, just copy and paste the text into a text file.
8. Import the signed certificate:

keytool -import -trustcacerts -alias s1as -file mycert.cer -keystore keystore.jks

This command assumes the certificate is located in a file named mycert.cer.
9. If you get the error "Failed to establish chain from reply", install the issuing Certificate Authority's root and any intermediate certificates into the keystore. The root CA chain establishes the validity of the CA signature on your certificate. Although most common root CA chains are included in the bundled JVM's trust store, you may need to import additional root certificates, such as certificates belonging to a private CA. To do so:

keytool -import -alias [Any_alias] -file -keystore /appserver/glassfish/domains/domain1/config/keystore.jks

When done importing the certificate chain, try importing the signed certificate again.

Import an Existing Keypair into the Keystore

These steps describe how to import an existing public and private key into the Controller keystore. We'll step through this scenario assuming that the existing public and private keys need to be converted to a format compatible with Java Keystore, say from DER format to PKCS#12. You'll need to use OpenSSL to combine the public and private keys, and then keytool to import the combined keys into the Controller's keystore.

Most Linux distributions include OpenSSL. If you are using Windows or your Linux distribution does not include OpenSSL, you may find more information on the OpenSSL website.

This assumes that we have the following files:

private key: private.key
signed public key: cert.crt
CA root chain: ca.crt

The private key you use for the following steps must be in plain text format. Also, when performing the following procedures, do not attempt to associate a password to the private key as you convert it to PKCS12 keystore form. If you do, the following steps can be completed as described, but you will encounter an exception when starting up the Controller, with the error message: "java.security.UnrecoverableKeyException: Cannot recover key".

To import an existing keypair into the Controller keystore

1. Use OpenSSL to combine your existing private key and public key into a compatible Java keystore:


openssl pkcs12 -inkey private.key -in cert.crt -export -out keystore.p12

2. If the Controller is still running, stop it.
3. Change to the keystore directory:

cd /appserver/glassfish/domains/domain1/config/

4. Create a backup of the keystore file. For example, on Linux, you can run:

cp keystore.jks keystore.jks.backup

On Windows, you can use the copy command in a similar manner.
5. Delete the self-signed certificate with alias s1as from the default keystore:

keytool -delete -alias s1as -keystore keystore.jks

6. Import the PKCS #12 key into the default keystore:

keytool -importkeystore -srckeystore keystore.p12 -srcstoretype pkcs12 -destkeystore keystore.jks -deststoretype JKS

7. Update the alias name on the key pair you just imported:

keytool -changealias -alias "1" -destalias "s1as" -keystore keystore.jks

8. Change the password of the imported private key:

keytool -keypasswd -keystore keystore.jks -alias s1as -keypass <.p12_file_password> -new

For the new private key password, use the default (changeit) or the master password set as described in Change Keystore Password, if changed.
9. If you get the error "Failed to establish chain from reply", install the issuing Certificate Authority's root and any intermediate certificates into the keystore. The root CA chain establishes the validity of the CA signature on your certificate. Although most common root CA chains are included in the cacerts.jks truststore, you may need to import additional root certificates. To do so:


keytool -import -alias -file -keystore /appserver/glassfish/domains/domain1/config/keystore.jks

When done, try importing the signed certificate again. 10. Start the Controller.

Verify the Use of SSL

To make sure the configuration works, use a browser to connect to the Controller over the default secure port, port 8181:

https://:8181/controller

Make sure the Controller entry page loads in the browser correctly. Also verify that the browser indicates a secure connection. Most browsers display a lock icon next to the URL to indicate a secure connection.

After changing the certificate on the Controller, you will need to import the public key of the certificate to the agent truststore. For information on how to do this, see the topic specific for the agent type:

EUM aggregator: Troubleshoot Your EUM Setup
Java Agent: Enable SSL (Java)
.NET: Enable SSL (.NET)

When finished, you can turn off the default, non-secure port at 8090. To disable the port, you can use the asadmin Glassfish tool. From /appserver/glassfish/bin, run the following command:

./asadmin delete-http-listener http-listener-1

When prompted, enter admin as the user and the password for the Controller root user. On Windows, use the asadmin.bat script in a similar manner.

Change Keystore Password

The default password for the keystore used by the Controller is changeit. This is the default password for the Glassfish keystore, and is a well-known (and thus insecure) password. For a secure installation, you need to change it.

Changing the password in this manner does not affect the administration password you use to access the Glassfish administration console. See Administrative Users for information on changing this password.

To change the password you must use the Glassfish administration tool (rather than the keytool utility directly). Using the Glassfish administration tool allows the Glassfish instance to access the keys at runtime.

If you change the keystore password directly using the keytool, the Controller generates the following error message at start up:

Caused by: java.lang.IllegalStateException: Keystore was tampered with, or password was incorrect

If you encounter this scenario, change the password using the asadmin utility, as described next.

To change Glassfish passwords

1. Stop the Controller. 2. Change the Glassfish master password:

/appserver/glassfish/bin/asadmin change-master-password --savemasterpassword=true

3. Restart the Controller and make sure it starts error free.

Changing the master password with asadmin changes the password for the keystore and for the s1as key. It does not change the password of any additional keys you have added to the keystore, however. If you have added keys to the keystore, you need to change their password to match the new master password. Use the keytool to change their passwords as follows:

keytool -keypasswd -alias myserver -keystore keystore.jks -storepass

Updating an Expired Certificate

The steps to renew an expired or soon-to-expire certificate are essentially the same as those for replacing the default certificate, as documented in Create a Certificate and Generate a CSR. In summary, the steps to update the expired certificate are to:

1. Back up the existing keystore. 2. Remove the expiring key:

keytool -delete -alias s1as -keystore keystore.jks

3. Generate the key pair:

keytool -genkeypair -alias s1as -keyalg RSA -keystore keystore.jks -keysize 2048 -validity 1825

4. Generate the signature request and import the signed certificate when returned from the CA.

For details, see Create a Certificate and Generate a CSR.

Configure the Security Protocol

On this page:

Enable TLS for a Controller on Windows Enable TLS for a Controller on Linux Configure the Security Protocol for Agents Enabling Stronger Encryption Keys

The Controller uses the TLSv1.2 security protocol for secure connections by default. Additionally, SSLv3 is disabled by default.

It is possible to change the security protocols used by the Controller. You may need to change the security protocol if using a Controller that is version 3.8.1 or later with agents that don't support TLSv1.2. These agents include:

Java Agent version 3.8.1 or earlier (see Agent - Controller Compatibility Matrix for complete SSL compatibility information)

.NET Agent running on .NET Framework 4.5 or earlier

If upgrading the agents or .NET framework is not possible, you will need to enable TLSv1 on the Controller using the asadmin command-line utility, as described here. To use the utility, you will need to supply the password configured for the root user for the Controller.

These changes require a restart of the Controller application server, which results in a brief service downtime. You may wish to apply these changes when the downtime will have the least impact.

To maintain a secure environment, APIs that are downstream of the Controller should also use TLS. If SSL3 is required, you can enable it. See the following Oracle JDK 7u76 documentation: http://www.oracle.com/technetwork/java/javase/7u76-relnotes-2389087.html.

Enable TLS for a Controller on Windows

1. From a command prompt, change to the Glassfish bin directory, as follows:

cd \appserver\glassfish\bin

2. Run the asadmin utility:

asadmin.bat set configs.config.server-config.network-config.protocols.protocol.http-listener-2.ssl.tls-enabled=true

3. Enter the user name "admin" when prompted.
4. For the password, use the password configured for the root user for the Controller. As an alternative to entering user credentials interactively, you can pass them at the command line, as described in the following instructions for Linux.
5. Restart the Controller application server as described in Start or Stop the Controller.

Enable TLS for a Controller on Linux

1. At the command line, change to the Glassfish bin directory:

cd /appserver/glassfish/bin

2. Run the asadmin utility as follows to enable TLS. You can enter the user name and password for the administrator interactively or pass them in the command, as follows:

./asadmin --user admin --passwordfile $INSTALL_HOME/.passwordfile set configs.config.server-config.network-config.protocols.protocol.http-listener-2.ssl.tls-enabled=true

3. If prompted, enter the user name "admin" and the password for the Controller root user.
4. Restart the Controller application server as described in Start or Stop the Controller.

Configure the Security Protocol for Agents

On the agent side, the .NET Agent uses the settings in the container to negotiate the SSL protocol with the Controller. Normally you do not need to configure the security protocol for the .NET Agent.

The table in Agent - Controller Compatibility Matrix lists the default security protocol for the different versions of the Java Agent. If the default security protocol for your version of an agent is incompatible with the Controller or it is incompatible with an intervening proxy, pass the -Dappdynamics.agent.ssl.protocol system property to configure one of the following security protocols:

SSL TLS TLSv1.2 TLSv1.1

java -javaagent:/javaagent.jar ... -Dappdynamics.agent.ssl.protocol ...
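For example, to force TLSv1.2 the full command might look like the following; the agent path and application jar are hypothetical placeholders:

java -javaagent:/opt/appdynamics/appagent/javaagent.jar \
     -Dappdynamics.agent.ssl.protocol=TLSv1.2 \
     -jar myapp.jar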

Enabling Stronger Encryption Keys

By default, the Controller's embedded Java runtime only supports up to 128-bit encryption key lengths for secure connections.

You can enable up to 256-bit encryption keys by downloading and installing the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files in the Controller's embedded Java runtime. Once installed, the Controller can establish connections using the stronger ciphers immediately.

To enable stronger encryption keys in the Controller:

1. Download the Unlimited Strength Jurisdiction Policy Files from the following location: http://www.oracle.com/technetwork/java/javase/downloads/jce-7-download-432124.html
2. Install the policy files on the Controller by placing them in the following location: /jre/lib/security
3. Restart the Controller app server.

After restarting the Controller app server, the following cipher suites become available:

TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
TLS_RSA_WITH_AES_256_CBC_SHA256
TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384
TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384
TLS_DHE_RSA_WITH_AES_256_CBC_SHA256
TLS_DHE_DSS_WITH_AES_256_CBC_SHA256
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
TLS_RSA_WITH_AES_256_CBC_SHA
TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA
TLS_ECDH_RSA_WITH_AES_256_CBC_SHA
TLS_DHE_RSA_WITH_AES_256_CBC_SHA
TLS_DHE_DSS_WITH_AES_256_CBC_SHA

Configure an SSH Key for Controller Access

On this page:

Set up SSH Key Pairs Using DSA Set up SSH Key Pairs Using RSA


You can set up SSH (Secure Shell) with public/private key pairs so that you do not have to type the password each time you access a Controller machine by SSH. Setting up keys allows scripts and automation processes to access the Controller easily. You can generate DSA or, if you want stronger encryption, RSA keys.

Set up SSH Key Pairs Using DSA

1. Run the ssh command that sets up the key pair:

% ssh-keygen -t dsa

2. At the following prompt, press Enter to accept the default key location, or type another:

Generating public/private dsa key pair.

Enter file in which to save the key (~/.ssh/id_dsa):

3. Press return at the password prompt:

Enter passphrase (empty for no passphrase):

4. Press Return again to confirm the password:

Enter same passphrase again:

You should see the following information:

Your identification has been saved in ~/.ssh/id_dsa

Your public key has been saved in ~/.ssh/id_dsa.pub

The key fingerprint is:

If SSH continues to prompt you for your password, verify your permissions in your remote .ssh directory. It should have only your own read/write/access permission (octal 700):

% chmod 700 ~/.ssh

5. Open the local ~/.ssh/id_dsa.pub file and paste its contents into the ~/.ssh/authorized_keys file on the remote host (a one-line example follows these steps).
6. Update the permissions on the authorized_keys file on the remote host as follows:

% chmod 600 ~/.ssh/authorized_keys
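As a sketch of step 5, you can append the public key to the remote authorized_keys file in one command; user@remote-host is a hypothetical account and host:

% cat ~/.ssh/id_dsa.pub | ssh user@remote-host 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'

If your system provides ssh-copy-id, it accomplishes the same thing.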

Set up SSH Key Pairs Using RSA

Run the ssh command that sets up the key pair:

% ssh-keygen -t rsa

The generated files will be named id_rsa and id_rsa.pub, instead of id_dsa and id_dsa.pub.

Otherwise, the remaining steps are identical to those beginning with step 2 in the steps above.

Mutual Authentication

On this page:

Before you Begin: Important Notes and Best Practices Client Authentication Setup Set Up Server Authentication on Controller Set Up Server Authentication on Agents Set Up Client Keystore on Controller Configure Agents to Access Client Keystore on Controller Enable Client Authentication on Controller

(New in 4.2.15) In addition to implementing Server Authentication, you can also implement mutual (client and server) authentication. Client authentication enables the Controller to ensure that only authorized and verified agents can establish connections. The following steps outline the workflow to implement mutual authentication.

Before you Begin: Important Notes and Best Practices

Note the following:

Client Authentication requires version 4.2.15 (or higher) of both the agent and the Controller.
Client Authentication is supported for the following agents only: Java Agent and Standalone Machine Agent.
It is good practice to set up and verify client authentication on one agent at first. After you confirm that client authentication is working for that agent, you can then proceed to other agents.
The following procedures use the default key and keystore password (changeit) for the keystore. Before you proceed with this workflow, it is good practice to change this default password, as described in Change Keystore Password. Use the new password when you perform these procedures.

Client Authentication Setup

The following steps outline the workflow for setting up Client-based Authentication. These steps are described in detail in the following sections.

1. Controller: Set up Server Authentication.
2. Agent: Set up Server Authentication.
3. Controller: Set up a client keystore.
4. Agent:
   a. Copy the client keystore from the Controller to the agent.
   b. Configure the agent to access the keystore.
5. Controller: Configure the application server to support Client Authentication.

Set Up Server Authentication on Controller

1. Open a CLI window on the Controller host and cd to the directory: /appserver/glassfish/domains/domain1/config
2. Create a trusted root certificate store to preserve the Controller public key. The following command exports the public key (s1as) and stores it in the new certificate store file server.cer.

/bin/keytool -export -alias s1as -file server.cer \
  -keystore keystore.jks -storepass changeit

3. (Optional) To view information about the public and private keys in keystore.jks, enter the following command:

/bin/keytool -list -v -alias s1as \
  -keystore ./keystore.jks \
  -storepass changeit

4. Import the Controller public key from server.cer into an agent keystore. The following command creates a new keystore (agent_truststore.jks); the Controller presents its public key to agents when they set up an SSL connection, and agents use this truststore to verify it.

/bin/keytool -import -v -alias controller_alias -file server.cer \
  -keystore agent_truststore.jks \
  -storepass changeit

5. This command displays the certificate and asks if you trust it. Answer yes.

Set Up Server Authentication on Agents

For each authorized agent, do the following:

1. Copy the agent_truststore.jks file from the Controller to the root directory of the agent ( or ).
2. For each authorized agent, specify the following properties in the /conf/controller-info.xml file as follows: true 443 agent_truststore.jks changeit

Set Up Client Keystore on Controller

In this procedure, you will create a signed certificate and import it into the client keystore. The following steps use the Controller to sign the certificate, but you can also use a third-party Certificate Authority (CA).

1. (Optional) To view information about the public and private key in the Controller keystore, enter the following command:

/bin/keytool -list -v -alias s1as \
  -keystore ./keystore.jks \
  -storepass changeit

2. Create a keystore (clientkeystore.jks) that includes the Controller public/private keypair. In , run the following command.

/bin/keytool -genkey -alias client-alias -keyalg RSA \
  -keystore clientkeystore.jks \
  -storepass changeit \
  -keypass changeit

The keytool prompts you for your name, organization, and other information it needs to generate the key.

3. Generate a certificate signing request (client.csr) that can be signed by a Certificate Authority (CA).


/bin/keytool -certreq -v -alias client-alias -file client.csr \
  -keystore clientkeystore.jks \
  -storepass changeit \
  -keypass changeit

4. Get the request (client.csr) signed by a trusted CA. The following command uses the Controller as a CA, which creates a new file (signedClient.cer) with the Controller-signed certificate.

/bin/keytool -gencert -infile ./client.csr -outfile signedClient.cer -alias s1as \
  -keystore ./keystore.jks \
  -storepass changeit \
  -keypass changeit

5. (Optional) To view information about the signed certificate, enter the following command:

/bin/keytool -printcert -v -file ./signedClient.cer

6. Verify that the certificate is indeed signed by the authentic Certificate Authority. There are two ways to do this:
   Copy the public key of the signing authority into the trusted root set, or
   Import the public key of the signing authority into the client keystore.

The following command does the latter: it imports the Controller public key from server.cer into clientkeystore.jks.

/bin/keytool -import -v -alias controller_alias -file server.cer \
  -keystore clientkeystore.jks \
  -storepass changeit

This command asks if you trust the certificate; when you enter yes, it should output the message: Certificate was added to keystore [Storing clientkeystore.jks]
7. (Optional) To view the contents of clientkeystore.jks, enter the following command:

/bin/keytool -list -v -keystore clientkeystore.jks -storepass changeit

The keystore should show entries for controller_alias and client-alias (which is still unsigned).
8. Import the signed public key certificate into the client keystore. The following command imports signedClient.cer into clientkeystore.jks.

/bin/keytool -importcert -v -alias client-alias -file ./signedClient.cer \
  -keystore clientkeystore.jks \
  -storepass changeit \
  -keypass changeit

You now have a password-protected clientkeystore.jks file on the Controller with a signed certificate that verifies the Controller's authenticity.
9. Verify that the trusted root certificate on the Controller includes the public key of the signing authority. This procedure used the Controller as the Certificate Authority, so the public key is already included. You can verify this by running the following command. The public key of the signing authority should now be part of the agent's public key certificate.

/bin/keytool -list -v -alias client-alias \
  -keystore clientkeystore.jks -storepass changeit

Configure Agents to Access Client Keystore on Controller

For each authorized agent, do the following:

1. Copy the clientkeystore.jks file from the Controller to the following directory on the agent:
   Java agent: /conf
   Machine Agent:
2. Specify the following properties in the /conf/controller-info.xml file as follows:

true clientkeystore.jks changeit changeit

Enable Client Authentication on Controller

On the Controller, do the following:

1. Start the Controller if it is not currently running.
2. Open a CLI window and cd to /appserver/glassfish/bin.
3. Enter the following command to start the AppServer Admin tool: ./asadmin
   You should now see the asadmin prompt: asadmin>
4. Enter the following command: list-http-listeners
   This lists the available set of HTTP listeners. For example: http-listener-1 http-listener-2 admin-listener
5. Configure one of these listeners by running the following commands. In this example, we are configuring http-listener-2:

set configs.config.server-config.network-config.protocols.protocol.http-listener-2.ssl.key-store=keystore.jks
set configs.config.server-config.network-config.protocols.protocol.http-listener-2.ssl.client-auth-enabled=true
set configs.config.server-config.network-config.protocols.protocol.http-listener-2.ssl.trust-store=cacerts.jks

6. To verify that the properties are set correctly, run the following command. Again, this example assumes http-listener-2.

get configs.config.server-config.network-config.protocols.protocol.http-listener-2.*

7. Restart the Controller.

Cloud Services

The AppDynamics Application Intelligence Platform integrates with cloud service platforms in several ways.

Certain types of Platform as a Service (PaaS) providers, including Pivotal Cloud Foundry and Red Hat Openshift, make it easy to add AppDynamics agent monitoring to virtual machine instances that host application servers.

The cloud auto-scaling features allow you to automate operations in virtual environments based on conditions detected by AppDynamics, for instance, provisioning additional application server instances in response to load.

These topics are discussed in the following sections:

Cloud Auto-Scaling

Platform as a Service Integrations

Cloud Auto-Scaling

On this page:

Workflows Task Templates Workflow Creation Prerequisites Make Cloud Auto-Scaling Visible on the Controller UI

Related pages:

Cloud Auto-Scaling Actions Install the Standalone Machine Agent

An AppDynamics workflow consists of commonly executed actions organized together into a repeatable, automated flow. Workflows are often used to implement automated tasks in virtualized, cloud-based environments. For example, you can add machines in response to increased application load, take down machines, or reconfigure machines.

Workflows can be run manually, on a repeating schedule, or as the action taken by a policy that has been triggered by a performance event, like a health rule violation. Workflows can invoke tasks on application machines, such as running an Ant task.

Workflows

AppDynamics workflows are divided into steps, based on five basic step-types, which indicate generally the type of action each is designed to accomplish:

Create new machines and configure them.
Terminate machines.
Configure machines that are already deployed.
Configure a specific machine: a step to perform tasks on any running machines instrumented with the Standalone Machine Agent, cloud-based or not.
Manual: a step that pauses workflow execution until required manual steps are taken.

Workflows are executed by the Standalone Machine Agent, which must be available on every target machine. The Standalone Machine Agent configuration must have the Enable Orchestration property set to true. See Standalone Machine Agent Configuration Property Reference for more about this property. This property must be true in the Standalone Machine Agent included in the images used to create new cloud-based machines.

Since machines are brought up and shut down frequently in cloud environments, there are several additional configuration settings suitable for compute cloud environments. The automatic node name settings and the reuse node name setting for the Java and .NET agent ensure that a new logical node in the AppDynamics model isn't created every time a node is started.

For more information on the workflow steps, see Create a Workflow and Workflow Steps.

Task Templates

The basic work of launching and terminating a step-type is taken care of by the step-type itself, but all other workflow work is defined and executed using tasks. Tasks are sequential units of code execution, with defined inputs and outputs. AppDynamics provides templates for common tasks, such as running Ant on a supplied build file or creating a specified schema in a MySQL database. You use these templates to create your own tasks, with the inputs and outputs you need.

In addition, there are task templates that are used to launch shell or batch scripts.

Workflow Creation Prerequisites

To prepare for creating workflows, you need to do the following:

1. Install the Standalone Machine Agent on every machine on which you wish to run workflows. If you are using .NET, you must also install the Standalone Machine Agent, as the embedded .NET machine agent does not support workflow automation.

2. Set the agent's Enable Orchestration property to true. (See Standalone Machine Agent Configuration Property Reference.)
3. If you want to use any shell/batch scripts in your workflows, create them.
4. To use workflows to interact with the cloud, register your cloud provider and the images on that cloud provider that you are going to use to create your machines, and make sure you have the appropriate AppDynamics cloud connector available to use. See Compute Clouds.
5. Enable the cloud auto-scaling features in your user preferences in the UI. (See the following section.)
6. Check the Task Library and make sure you have the task templates you need. You can create templates if needed.

Make Cloud Auto-Scaling Visible on the Controller UI

By default, cloud auto scaling features are not visible in the Controller UI. Enable the feature using the following steps:

1. Click the Settings (gear) icon in the upper right corner and select My Preferences.
2. Check the Show Automation Features checkbox.
3. Access the Cloud Auto-Scaling features from the top menu bar.

Compute Clouds

On this page:

Supported Compute Clouds Set Up Overview Deploy the Cloud Connector Register a Compute Cloud

A cloud computing workflow is a workflow that interacts with cloud-based machines, typically by creating, removing or configuring them.

Supported Compute Clouds

In order to create workflows that allow the automatic creation and deletion of cloud-based instances in response to load, the AppDynamics controller must have access to a cloud-provider-specific cloud connector extension. The AppDynamics Community provides many of these cloud connector extensions. You can download supported cloud connector extensions from the AppDynamics Exchange.

Instructions for installing the connector appear on its download page.

Set Up Overview

Before you can create a workflow for cloud orchestration, you need to set up your cloud-specific components:

1. Sign up for an account with a supported cloud services provider. You will need to use the credentials you are given to register your provider in AppDynamics, so that it can act on your behalf. You must also set up your account with images of the entities you want to be able to launch in the cloud, for example, AMIs for AWS.
2. Deploy a cloud connector on the Controller so that AppDynamics can interact with your cloud provider.
3. After restarting the Controller, you can select the registered cloud as an option in the Select a Compute Cloud menu when creating a workflow.
4. Register the Compute Cloud, providing credentials for AppDynamics to use to access administrative functions in the cloud.
5. Define the kinds of machines you want to be available to use in your workflows (for example, the format, OS, and instance type), depending on the cloud type you are working on. Configure the images you want to be able to launch in the cloud. Once they are set up, the images appear in the Select an Image menu used for creating a workflow. See Machine Images and Instances.

Deploy the Cloud Connector

You need to deploy a cloud connector to provide an interface between AppDynamics and your cloud provider. If there is no existing connector for your cloud provider, you can create your own. See Custom Cloud Connectors.

For SaaS Controller Users

Contact your AppDynamics representative. The connector will be deployed for you.

For On-Premises Controller Users

1. Open the AppDynamics Exchange on the AppDynamics Community:

http://community.appdynamics.com/t5/AppDynamics-eXchange/idb-p/extensions

2. Under the Categories column on the left side, select Cloud Connector. The connectors appear in the right column.
3. To see instructions for installing the connector, click the name of the connector in which you are interested, or simply click Download.
4. Unzip the downloaded file into the /lib/connectors directory of your Controller instance (see the example after this list).
5. Restart the Controller to have the Controller pick up the new connector.
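For example, assuming the connector archive was downloaded to the current directory and /opt/appdynamics/controller is a hypothetical Controller home, steps 4 and 5 might look like:

unzip my-cloud-connector.zip -d /opt/appdynamics/controller/lib/connectors/
# Then restart the Controller so it picks up the new connector.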

Register a Compute Cloud

AppDynamics needs to have the credentials for your cloud provider account. Register the compute cloud as follows:

1. Click Cloud Auto-Scaling from the top menu and then Compute Clouds.
2. Click Register Compute Cloud.
3. Enter a name and description of the compute cloud instance, and choose the type of cloud from the list of connectors available to the Controller.
4. Enter the account credentials that are specific to the cloud provider. These typically include an identifier or name, the access key and the secret access key.

Go on to select your preferred machine images on Machine Images and Instances.

Machine Images and Instances

On this page:

Registering Images Test the Image Registration

You must register the images you have prepared on your cloud provider so that AppDynamics knows which ones to use in launching machine instances on the cloud.

Registering Images

You need to define the environment that you want AppDynamics to use in creating cloud-based workflows. You can define the environment, or images, from the Images page, which you can access from the Cloud Auto-Scaling page, accessible from the top menu.

In the page, click Register Image and enter a name and description of the image, which will appear when you are creating workflows. When configuring the image, you also need to specify the following:

The operating system used by this machine.
The cloud provider that you have previously registered. For instructions about registering providers, see Compute Clouds.
The format that the image is in, for example, AMI.
The image name and any other information required by your provider.

Test the Image Registration

After registering the image, you can test your set up by trying to launch an instance based on the image manually:

1. In the Images window, select the image that you want to use. 2. Click Launch Instance.

All running instances of this image are listed under the details for the image or in the Machines window. You can also restart or terminate the instances from this screen.

Create a Workflow and Workflow Steps

On this page:

Create a New Workflow Create Workflow Steps

This topic describes how to create a workflow in a specific application. A workflow is made up of these components: one or more workflow steps, which are each made up of one or more tasks. A task corresponds to some action on the machine, such as invoking an Ant script, installing a package with RPM, or invoking some other command.

Before starting, confirm you have completed all the prerequisites as listed on Cloud Auto-Scaling.

Create a New Workflow

1. In the left navigation pane of the UI, select an application.
2. Click Alert & Respond > Cloud Auto-Scaling > Workflows.
3. Click New and enter a name and brief description for the workflow. The name is how you will access the workflow when you create policies and schedules.
4. Click Create Workflow and go on to create a set of workflow steps, adding them individually until you have completed your set up. See Create Workflow Steps.

Create Workflow Steps

To create a workflow, you create a set of workflow steps. Each step has a step-type, and may have one or more tasks that it must complete, as follows:

1. Click the Steps icon to designate the step-type of the workflow:


2. Give the step a name and select the type of step:
   Create new Machines and configure Them. This step type is used to launch new instances in the cloud, and to complete any necessary configuration tasks on them once they are started. The settings to configure are:
      The cloud provider you wish to use from the Select a Compute Cloud dropdown. If you have not yet registered your provider, see Compute Clouds for more information on registering cloud providers.
      The machine image you wish to use from the Select an Image dropdown. If you have not yet registered your provider, you can click Register an Image. See Machine Images and Instances for more on registering machine images.
      The number of machines you wish to create with this workflow step.
      A timeout in the event of connection issues, after which the workflow stops trying to complete the flow.
   Terminate Machines. This step type is used to terminate cloud instances and to complete any related tasks:
      The tier or tiers running on the machines to terminate.
      A timeout in the event of connection issues, after which the workflow will stop trying to complete the flow.
      The number of machines that match the criteria to terminate.
   Configure existing Machines. This step type is used to complete tasks on cloud-based machines that are already deployed. Specify the tier or tiers to be configured.
   Configure a specific Machine. This step type is used to complete any necessary tasks on any running instrumented machines, cloud-based or not.
   Manual. Use this step type if your workflow needs manual user intervention. It pauses workflow execution until required manual steps are taken. When a workflow with this step-type in it is run, you must re-start the workflow manually when you have completed your non-automated work.
3. Use the Tasks icon to add any necessary tasks to the step. See Add Tasks for Workflow Steps.
4. Click the Steps icon to add the next step.
5. Click the Save icon in the upper left corner when you have completed your workflow.

Add Tasks for Workflow Steps

On this page:

The Task Library Create a Task Add a Task to a Workflow Step

Related pages:

Custom Tasks Create a Workflow and Workflow Steps

Tasks are the basic unit of execution for a workflow. You create tasks from templates in the task library. Once they are created and customized they can be added to each step of a workflow.

The Task Library

You use the Task Library to select task templates from which to make your tasks. There are three kinds of templates in the Task Library:

Default task templates. AppDynamics ships with templates for many common tasks. Templates that launch your shell or batch scripts. You can automate the running of scripts in blocking or non-blocking mode. See Create Custom Tasks Using Shell or Batch Scripts

Copyright © AppDynamics 2012-2017 Page 230 Task templates that you create. See Create a New Task Template Using XML.

Create a Task

1. From the left application menu, click Cloud Auto-Scaling > Tasks.
2. Click New. The Create Task window opens.
3. Give the task a name.
4. Scroll through the task template list to find a suitable template.
5. Click the template name: the parameter UI is displayed.

6. Select the Input Parameters tab to specify what values should serve as the input to the task. Asterisks indicate the required parameters. If appropriate, you can use a workflow variable, which is evaluated at runtime. Click the icon to the right of the field to access the Select Variable popup. The types of workflow variables include:
   Shared Variables: This variable stores a value that is accessible by more than one task. These can also be accessed directly using Cloud Auto-Scaling > Shared Variables.
   Tier IP Addresses: This variable returns a comma separated list of IP addresses of nodes in a tier.
   Machine IP: This variable returns the IP address of the machine where the task is executing.
   Task Output: This variable returns a value which is output from another task. To use a task's output, the task must define output parameters, and it must have executed before the task that uses it.
7. Select the Output Parameter Binding tab and enter values if needed.

8. Click Create Task.
9. Repeat to create as many tasks as necessary.

A list of tasks is created. You can later edit or remove a task from the Task Window.

Add a Task to a Workflow Step

1. From the left application menu, click Cloud Auto-Scaling > Workflows.
2. To add the task to an existing workflow, select the workflow and click Edit. Or you can create a new workflow. See Create a Workflow and Workflow Steps.
3. Click the Task + icon in the workflow step to which you wish to add the task.
4. Select the task to add to your workflow from the list.

5. Click Add Selected Task to Workflow.

Custom Tasks

On this page:

Create a New Task Template Using XML Create Custom Tasks Using Shell or Batch Scripts Create Your Own Task Launcher Based on a Java Class

There are three ways to create custom tasks for your workflows.

Create a New Task Template Using XML

AppDynamics provides a set of default task templates. You can add your own task templates using XML and the default task launcher, AntTask.

XML File Types

You must create two XML files to create a new template. You must use these names.

task.xml: defines the argument list for the input/output parameter tabs on the UI and links it to the task launcher.
run.xml: defines the syntax for running the task command.

The Task Library

Look through the Task Library to find a template similar to the one you are creating and download the template zip file.

1. In the left navigation pane of the UI, select the appropriate application.
2. Click Cloud Auto-Scaling > Tasks.
3. Click New. The Create Task dialog opens.
4. Click View All Task Templates at the bottom left. The Task Template Library opens in the main window.
5. Click Cancel to close the Create Task dialog.
6. Find a similar task template.
7. In the Task Zip File, click the zip file name.
8. Open the zip file. It should contain both a run.xml and a task.xml. If it doesn't, download a different zip file.

Create your task.xml file

Using the downloaded task.xml file as a model, create your task.xml. See the BZip2 sample below.

- Choose a name, display-name, and description to show in the task template list.
- The type is always java.
- The task-argument element contains as many argument elements as the task command has arguments.
  - The name attribute is the name displayed in the Input Parameters tab of the UI.
  - The description attribute is the content of the tooltip displayed in the Input Parameters tab of the UI.
  - The is-required attribute indicates whether the argument is required.
  - Additional attributes, such as type and allowed values, depend on the task.
- The java-task and impl-class elements always contain com.singularity.ee.agent.systemagent.task.ant.AntTask. This is the task launcher class.

The BZip2 sample uses the following values: name and display-name BZip2; description "Zips to a bzip file using BZip2 algorithm"; type java; impl-class com.singularity.ee.agent.systemagent.task.ant.AntTask.
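The original task.xml listing is not fully preserved in this extract. The following is only a hedged reconstruction based on the element and attribute names described above; the root element name, the exact nesting, and the sample argument are assumptions rather than confirmed syntax:

    <?xml version="1.0"?>
    <!-- Hedged reconstruction of the BZip2 task.xml; nesting and the argument shown are assumed -->
    <task>
        <name>BZip2</name>
        <display-name>BZip2</display-name>
        <description>Zips to a bzip file using BZip2 algorithm</description>
        <type>java</type>
        <task-argument>
            <!-- one argument element per argument of the task command; this one is illustrative -->
            <argument name="Source File" description="File to compress" is-required="true"/>
        </task-argument>
        <java-task>
            <impl-class>com.singularity.ee.agent.systemagent.task.ant.AntTask</impl-class>
        </java-task>
    </task>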

Create your run.xml file

Using the downloaded run.xml file as a model, create your run.xml. See the BZip2 sample below.

- You must begin this file with an XML declaration.
- The project, property, and target names must be as they are in the sample.
- The file args.properties is populated based on input from the Input Parameters UI.
- The element inside the target element is the syntax to run the task command. The arguments come from args.properties.
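The downloaded BZip2 run.xml sample is likewise not reproduced here. The sketch below is illustrative only: the project, property, and target names shown are placeholders and must be replaced with the names used in the downloaded sample, and the executable and argument are assumptions for a BZip2-style task:

    <?xml version="1.0"?>
    <!-- Illustrative run.xml sketch; copy the required project, property, and target
         names from the downloaded sample rather than from this example -->
    <project name="run-task" default="run">
        <!-- args.properties is populated from the values entered on the Input Parameters tab -->
        <property file="args.properties"/>
        <target name="run">
            <!-- the element inside target runs the task command, using arguments from args.properties -->
            <exec executable="bzip2">
                <arg value="${source.file}"/>
            </exec>
        </target>
    </project>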

Package the XML files as a zip archive

Add the task.xml and run.xml files to a zip archive, and give the zip archive the same name as the task.

Add the zip archive as a task template

1. From the Cloud Auto-Scaling menu, select Task Library.
2. Click New. The Create Task Template dialog appears.
3. Provide the name and description for the task.
4. Upload the zip file containing the XML task files.
5. Click Create Task Template.

AppDynamics adds the task template to the Task Library. See Add Tasks for Workflow Steps for information on using this template.

Create Custom Tasks Using Shell or Batch Scripts

You can also create custom tasks by using a shell or batch script.

1. Create the shell or batch script file on the system.
2. In the left navigation pane of the UI, select an application.
3. Click Cloud Auto-Scaling > Tasks.
4. Click New +. The Create Task window opens.
5. Give the task a name.
6. From the task list, select the ExecuteShellScript or ExecuteShellScriptNonBlocking task template, as appropriate.
7. Browse to find the script file.
8. Click Create Task.

Create Your Own Task Launcher Based on a Java Class

If you need to create a completely new task that cannot be accomplished by using the AntTask launcher or by using a shell or batch script, you must create a Java class that implements the public com.singularity.ee.agent.systemagent.api.ITask interface bundled with the Machine Agent.

Your class must implement the following two methods:

- execute(), which takes input in the form of name-value pairs and the execution context for the task, and returns the task output.
- stop(), which stops the process immediately.

This class is referenced in the task.xml file, in the impl-class element.
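For illustration only, the task.xml for such a task would name your class in the impl-class element instead of the default AntTask launcher (the class name below is hypothetical):

    <java-task>
        <!-- hypothetical custom task launcher class implementing ITask -->
        <impl-class>com.example.systemagent.MyCustomTask</impl-class>
    </java-task>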

AppDynamics highly recommends using shell or batch scripts instead whenever possible. See Create Custom Tasks Using Shell or Batch Scripts for more information.

Custom Cloud Connectors

On this page:

Implementation Steps Metadata

Related pages:

Build an AppDynamics Extension

The AppDynamics IConnector interface allows you to implement custom orchestration functionality, such as creating, destroying, restarting, configuring, and validating machine and image instances, directly from the AppDynamics Controller user interface. It lets you create custom compute cloud connectors to support additional cloud service providers.

A custom connector is made up of your implementation of the IConnector interface and XML definition files for the connector. This page describes how to deploy the connector along with the format for the definition files.

Implementation Steps

1. Implement the IConnector interface in your custom connector class.
2. Compile the connector.
3. Package the compiled JAR and the XML metadata (see below) into a single directory.
4. Put the directory in: /lib/connectors.
5. Restart the Controller.

The AppDynamics Controller registers the cloud auto-scaling extension at startup. You should see the new connector in the Compute Clouds and Images tab under Systems.

Metadata

The metadata provides the Controller with the necessary information to register the new compute center, image, and image repository. The Controller uses the metadata to dynamically form the user interface and link the IConnector implementation to the Controller. The three XML files must be named:

- compute-center-types
- image-repository-types
- image-types
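As a hedged illustration of the packaged connector layout described in the implementation steps above (the connector directory and JAR names are hypothetical, the .xml extensions are assumed, and the parent path depends on where your Controller is installed):

    <controller-install>/lib/connectors/
        my-cloud-connector/
            my-cloud-connector.jar
            compute-center-types.xml
            image-repository-types.xml
            image-types.xml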

The compute-center-types XML schema is provided in an expander in the online documentation and is not reproduced here.

Element Definitions

compute-center-type

This is a list of compute-center-types associated with the IConnector implementation. Each compute-center-type is registered under the Compute Clouds tab in the UI and is defined by the following:

- Name: The compute center type name.
- Description: The compute center type description shown in the Controller GUI.

For example: "Amazon Elastic Computing Cloud".

connector-impl-class-name

This is the full name of the IConnector implementation class.

property-definition

This is a list of property definitions for the compute center. These definitions are used to dynamically generate the property text fields such as AWS Account ID, Access Key, and Secret Access Key.

- Name: Name of the property field, such as "AWS Account ID" in Figure 1.
- Description: Description of the property.
- Required: Whether the property field must be non-empty. Values: true/false.
- Type: Type of the property field: STRING or FILE.
- Default-string-value: Default initialization value of a STRING property.
- String-max-length: Maximum number of characters allowed in the property field.
- Allowed-string-values: Comma-delimited list of allowed string values. If specified, this property is displayed as a drop-down list of all the allowed string values.
- Default-file-value: Default value for a FILE type property.

For example, using the "AWS Account ID" property:

The sample values are: name "AWS Account ID", description "AWS Account ID", required true, type STRING, string-max-length 80.
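Only the values above survive from the original listing. A hedged sketch of how those values might appear in a property-definition follows; the element layout (child elements rather than attributes) is assumed from the field list above, not taken from the schema:

    <!-- Hedged sketch of a property-definition for the "AWS Account ID" field -->
    <property-definition>
        <name>AWS Account ID</name>
        <description>AWS Account ID</description>
        <required>true</required>
        <type>STRING</type>
        <string-max-length>80</string-max-length>
    </property-definition>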

machine-descriptor-definition

This contains a list of property definitions associated with a machine instance object of that compute center type. These properties appear in the Launch Instance window under the Images tab; for example, the EC2 Launch Instance window shows the machine descriptor definitions for the Amazon Elastic Computing Cloud compute type.

image-repository-types

When each compute cloud is registered, a corresponding imagestore object is created. The imagestore name, description, connector-impl-class-name, and property-definitions should mirror the Compute Center property-definitions.

The image-repository-types XML schema is provided in an expander in the online documentation and is not reproduced here.

image-types

Each image-type defines an image type shown under the Images tab.

The image-types XML schema is provided in an expander in the online documentation and is not reproduced here.

Platform as a Service Integrations

The AppDynamics Application Intelligence Platform integrates with Platform as a Service (PaaS) providers, providing easily implemented AppDynamics monitoring for cloud or PaaS-hosted applications.

In general, the integration allows developers to add AppDynamics app agent instrumentation to their applications as an additional service when provisioning their application host instance. In some cases, AppDynamics can also provide machine-level monitoring information for the virtual platform and supporting infrastructure.

The following section lists the PaaS providers with which the AppDynamics platform integrates. For details on how to use AppDynamics with specific platforms, see the documentation listed.

Supported Platform as a Service (PaaS) Providers

Pivotal Cloud Foundry (PCF): Application and machine monitoring in PCF environments. AppDynamics can provide built-in support for these Cloud Foundry buildpacks:

- Java Buildpack 3.4 and higher. (See Using AppDynamics with Java Applications on Pivotal Cloud Foundry for a walkthrough of using the Java buildpack.)
- PHP Buildpack 4.0+. (See Using AppDynamics with PHP Applications on Pivotal Cloud Foundry for a walkthrough of using the PHP buildpack.)

For more and updated information and a link to the AppDynamics tile page, see the AppDynamics APM Tile documentation.

Red Hat OpenShift 3: Docker images with built-in AppDynamics monitoring support for JBoss EAP 6.4 and WildFly 8.1. For documentation and download information, see the AppDynamics Java APM Agent page on the Red Hat Customer Portal.

Integration Modules

The AppDynamics Community Exchange includes many extensions that supplement the capabilities of the AppDynamics platform. Documentation on setting up and using each integration module appears with the module.

The following integration modules are built into the AppDynamics platform and are described on the pages linked here.

- Apica: see Apica & AppDynamics
- DB CAM: see Integrate AppDynamics with DB CAM
- Splunk: see Integrate AppDynamics with Splunk
- AppDynamics for Databases: see Integrate AppDynamics for Databases with AppDynamics Pro 3.7 and higher

Integrate AppDynamics with DB CAM

On this page:

Prerequisites for DB CAM Integration Configuring AppDynamics to Interface with DB CAM Linking to DB CAM from AppDynamics

Related pages:

Access the Administration Console Monitor Databases DB CAM

You can link to DB CAM for any DB CAM-monitored database that is discovered by AppDynamics. This integration provides access to the database performance metrics provided by DB CAM.

Prerequisites for DB CAM Integration

To use this integration you must have a DB CAM license. DB CAM must be configured to monitor the databases that you want to link to from AppDynamics.

Configuring AppDynamics to Interface with DB CAM

You configure DB CAM integration at the account level and at the app agent level.

Configure DB CAM at the Account Level

Configure the integration at the account level using the Administration Console at :/controller/admin.jsp

Create two new properties as name-value pairs in each account for which you want to enable DB CAM integration.

Configure One AppDynamics Account for DB CAM Integration

1. Log in to the Administration Console with the administrator root password.
2. Select Accounts.
3. In the accounts list, double-click the account for which you want to configure DB CAM integration.
4. In the upper right corner of the account screen, click Additional Account Properties.
5. In the Additional Account Properties screen, click Add Property to add the DB_CAM_URL property.
6. In the left field, enter "DB_CAM_URL".
7. In the right field, enter the URL of the AppDynamics Controller that you are configuring, using the syntax:

http[s]://:

8. Click Add Property again to add the DB_CAM_ENABLED property.
9. In the left field, enter "DB_CAM_ENABLED".
10. In the right field, enter "true".
11. Click OK to save the properties.
12. Log out of the Administration Console.

Configure DB CAM at the Agent Level

For each app agent for which you want to enable access to deep diagnostics from DB CAM:

1. Open the AppServerAgent/conf/app-agent-config.xml file for the app agent and locate the TransactionMonitoringService element (the original sample shows this entry with the dependent services BCIEngine,SnapshotService).
2. Add the jdbc-dbcam-integration-enabled property for the service, as sketched below.
3. Save the file.
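The exact markup of app-agent-config.xml is not reproduced in this extract and varies by agent version. The following is only an illustrative sketch of the change: the property name jdbc-dbcam-integration-enabled, the service name TransactionMonitoringService, and the dependent services list come from the steps above, while the surrounding element and attribute layout is assumed:

    <!-- Illustrative sketch only; the element and attribute layout of app-agent-config.xml is assumed -->
    <agent-service name="TransactionMonitoringService">
        <!-- the original sample lists dependent services BCIEngine,SnapshotService for this entry -->
        <property name="jdbc-dbcam-integration-enabled" value="true"/>
    </agent-service>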

Linking to DB CAM from AppDynamics

You can link to DB CAM from any AppDynamics flow map that displays a discovered DB CAM-monitored database. The flow map could be in a dashboard or a transaction snapshot.

If you link to DB CAM from a dashboard, you land in the DB CAM instance dashboard. If you link to DB CAM from a transaction snapshot, you land in the DB CAM Session Drill Down screen.

To Link to DB CAM from a Dashboard

1. In the flow map of a dashboard, right-click the link below a database icon.
2. Click Search in DBCam.

DB CAM launches and displays the instance dashboard for the selected database.

To Link to DB CAM from a Transaction Snapshot

1. In the flow map of a transaction snapshot, right-click the link below the database icon.
2. Click Search in DBCam.

DB CAM launches and displays the session drill down for the selected database.

Integrate AppDynamics with Splunk

On this page:

Configuring Splunk Integration Launching a Splunk Search from AppDynamics

Related pages:

Splunkbase Apps Splunk documentation

The AppDynamics-Splunk integration gives you a single, cohesive view of data gathered by AppDynamics and Splunk. With the integration, you can:

- Launch Splunk searches using auto-populated queries from the AppDynamics Console, based on criteria such as time ranges and node IP address.
- Push notifications on policy violations and events from AppDynamics to Splunk.
- Mine performance data from AppDynamics using the Controller REST API and push it into Splunk.

This topic covers the Splunk search capability that is launched from AppDynamics. Pushing AppDynamics notifications to Splunk and mining performance data from AppDynamics into Splunk require the Splunk extension. See the extension documentation for more information on those capabilities.

Configuring Splunk Integration

1. To enable and configure Splunk integration, log in to the Controller UI as an administrator.
2. Click Settings > Administration.
3. In the Administration window, click the Integration tab, and then click Splunk in the Integrations list.
4. Check the Enabled check box to enable the integration.
5. For the URL, enter the Splunk URL and port number.

6. (Optional) Enter Extra Query Parameters. These parameters are appended to each Splunk search initiated from AppDynamics.
7. Click Save.

Launching a Splunk Search from AppDynamics

You can launch a search of your Splunk logs for a specific time frame associated with a transaction snapshot from several points in AppDynamics.

To be able to launch a Splunk search, you need the following:

- Splunk credentials. The first time that you launch a Splunk search, you are prompted to provide your Splunk credentials. After this, your credentials are cached by the browser.
- The Splunk Server must be running.
- Your browser must be configured to allow pop-ups.

Enable Pop-ups: The first time that you access the Splunk Server, you are prompted to log in. If nothing happens, it is most likely that either your browser is blocking the Splunk login pop-up or the Splunk Server is not running.

You can access the Search Splunk option from the node dashboard as follows:

1. Navigate to a node dashboard.
2. From the Actions menu, select the Search Splunk item in the Integrations section.

To access Search Splunk from the list of transaction snapshots:

1. From the business transaction dashboard, select the Transaction Snapshot tab.
2. Select a transaction snapshot and right-click to access a list of options, or click More Actions to see Search Splunk in the list of options.
