OPC DA Interface Failover Manual



for OPC DA Interface Version 2.3.14.0-2.3.17.x

OSIsoft, LLC
777 Davis St., Suite 250
San Leandro, CA 94577 USA
Tel: (01) 510-297-5800
Fax: (01) 510-357-8136
Web: http://www.osisoft.com

OSIsoft Australia • Perth, Australia
OSIsoft Europe GmbH • Frankfurt, Germany
OSIsoft Asia Pte Ltd. • Singapore
OSIsoft Canada ULC • Montreal & Calgary, Canada
OSIsoft, LLC Representative Office • Shanghai, People’s Republic of China
OSIsoft Japan KK • Tokyo, Japan
OSIsoft Mexico S. De R.L. De C.V. • Mexico City, Mexico
OSIsoft do Brasil Sistemas Ltda. • Sao Paulo, Brazil

Copyright: © 2002-2018 OSIsoft, LLC. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, mechanical, photocopying, recording, or otherwise, without the prior written permission of OSIsoft, LLC.

OSIsoft, the OSIsoft logo and logotype, PI Analytics, PI ProcessBook, PI DataLink, ProcessPoint, Sigmafine, Analysis Framework, IT Monitor, MCN Health Monitor, PI System, PI ActiveView, PI ACE, PI AlarmView, PI BatchView, PI Data Services, PI Manual Logger, PI ProfileView, PI WebParts, ProTRAQ, RLINK, RtAnalytics, RtBaseline, RtPortal, RtPM, RtReports and RtWebParts are all trademarks of OSIsoft, LLC. All other trademarks or trade names used herein are the property of their respective owners.

U.S. GOVERNMENT RIGHTS Use, duplication or disclosure by the U.S. Government is subject to restrictions set forth in the OSIsoft, LLC license agreement and as provided in DFARS 227.7202, DFARS 252.227-7013, FAR 12.212, FAR 52.227, as applicable. OSIsoft, LLC.

Published: 07/2011

Table of Contents

Chapter 1. Introduction
   Reference Manuals
   Diagram of Hardware Connection
      Server-level failover configuration
      Interface-level failover using Microsoft clustering configuration

Chapter 2. Principles of Operation

Chapter 3. Server-Level Failover
   Server-Level Failover Options using ICU Control
   Server-Level Failover Configurations
      Inactive Server Does not Allow Connections
      Inactive Server Leaves OPC_STATUS_RUNNING State
      Inactive Server sets Quality to BAD
   Watchdog Tags
      Isolated Watchdog Tags
      Server-specific Watchdog Tags
      Multiple Interfaces
   Logging the Current Server
   Controlling Failover Timing
   Logfile Messages for Server-Level Failover

Chapter 4. Interface-Level Failover Using Microsoft Clustering
   Choosing a Cluster Mode
   Failover Mode
   How It Works
   Configuring APIOnline
   Multiple Interfaces
   OPCStopStat and Failover
   Checklist for Cluster Configuration
   Configuring the Interface for Cluster Failover
      One Interface
      Three Interfaces
   Buffering Data on Cluster Nodes
   Group and Resource Creation Using Cluster Administrator
      Cluster Group Configuration
      Installation of the Resources
   Logfile Messages for Interface-Level Failover

Chapter 5. Using Combination of Server- and Interface-Level Failover

Appendix A. Technical Support and Resources
   Before You Call or Write for Help
   Help Desk and Telephone Support
   Search Support
   Email-based Technical Support
   Online Technical Support
   Remote Access
   On-site Service
   Knowledge Center
   Upgrades

Appendix B. Revision History

Chapter 1. Introduction

This is a supplemental document for configuring the OPC DA Interface. It covers configuring and managing the interface for redundancy of the OPC server, the OPC DA interface, or both. It is intended to be used in conjunction with the OPC DA Interface Manual.

For server-level failover, no special hardware or software is required. Interface-level failover using Microsoft clustering requires a Microsoft Cluster. Interface-level failover using UniInt does not require any special hardware or software; configuration of the interface for UniInt failover is not covered in this manual, but is documented in the OPC DA Interface Manual.

In this manual, each type of redundancy is addressed separately, and their use in combination is examined briefly. Note that all the command-line parameters discussed in this document can be configured using the PI Interface Configuration Utility (PI ICU). The ICU simplifies configuration and maintenance and is strongly recommended. PI ICU can only be used for interfaces that collect data for PI Systems version 3.3 and greater.

Reference Manuals

OSIsoft
• OPC DA Interface Manual
• UniInt Interface User Manual


Diagram of Hardware Connection

Server-level failover configuration

Interface-level failover using Microsoft clustering configuration

Chapter 2. Principles of Operation

The OPC DA interface provides redundancy for both the OPC server and the interface itself.

For server-level failover, the interface can be configured to change to another OPC Server when the current server no longer serves data, when an OPC item changes value or quality, or when the OPC Server changes state. This allows the data collection process to be controlled at the lowest possible level, and ensures that data collection will continue even if the connection to the PI System fails.

For interface-level failover, two copies of the interface run at the same time, with only one sending data to the PI System. There are two types of interface-level failover supported by this interface: one uses Microsoft clustering and the other uses the UniInt failover mechanism. This manual describes configuration using Microsoft clustering. When Microsoft clustering is used, the cluster controls which copy of the interface is actually collecting data at any given time. Since the OPC Server may not be cluster-aware, several modes can be configured to ensure the least possible data loss in the event of a failover without putting undue stress on the underlying data collection system. This type of failover is not generally recommended unless the user has other reasons for running a cluster.

Server-level failover can be combined with either type of interface-level failover to achieve redundancy at both levels of data collection, so that even the loss of both an OPC Server and one OPC DA Interface will not interrupt data collection. However, both types of interface-level failover cannot be used at the same time.

Chapter 3. Server-Level Failover

The basis of server-level failover is that the interface should always be connected to a server that can provide data. The key question is how the interface knows when it should try to connect to another server. There are several ways in which an OPC Server may indicate that it is not able to serve data:

• It does not accept connections. This is the simplest case to deal with. There is nothing to configure except the name of the alternate server.

• It changes state when it is not active, usually to OPC_STATUS_SUSPENDED. The interface can be configured to fail over to another server when the current server leaves the RUNNING state.

• It sends bad quality for all tags. To use this option, an OPC item must be defined which always has GOOD quality except when the server is not serving data.

• It has one or more OPC items that have a specific value when the server can serve data and another specific value when it cannot. With this version, it may be necessary to use the Transformation and Scaling ability of the interface, but as long as there is some way to translate the not-active value to zero and the active value to a value greater than zero, these OPC items can be used for watchdog tags. It is possible to specify multiple tags as watchdogs, together with a minimum value that defines an active server, so that the loss of some server functionality (for instance, one or two of the underlying data sources not working) will not cause failover, but falling below the specified minimum will trigger failover to another server.

• It has one or more OPC items that have GOOD quality when the server can serve data and BAD quality when it cannot. One watchdog tag or multiple watchdog tags can be specified, along with the maximum number of watchdog tags that may have BAD quality on the active server without triggering failover.

• It has an OPC item that has a specific, known value when a given server can serve data and a different known value when that server cannot serve data. In this case there is always one item for each server, and two watchdog tags are used to control which server is active. This configuration is referred to as “server-specific watchdogs,” because each watchdog item refers to a given server’s current status, regardless of which server the item value was read from.

Note: Special handling is also included for Honeywell Plantscape servers, as several customers have had difficulty in getting server-level failover to work properly with these servers. The /HWPS flag tells the interface to failover when it receives an error code of 0xE00483FD or 0xE00483FC on any tag.
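For illustration only, a sketch of a command line combining the /HWPS flag with an ordinary backup server definition. The server and node names here are hypothetical placeholders, not Honeywell’s actual registered server names:

opcint /ps=o /id=1 /SERVER=Plantscape.Server.1 /BACKUP=node2::Plantscape.Server.1 /HWPS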

The following table lists the command-line parameters used to control server-level failover. The sections that follow explain how to configure the interface for each of the cases above, using these parameters, and how to use the timing parameters to get the least data loss with the most reliability.

Parameter   Description
/BACKUP     The name and location of the backup OPC server.
/CS         The string tag into which the name of the currently active server is written.
/FT         The number of seconds to try to connect before switching to the backup server.
/NI         The number of interfaces running on this node.
/SW         The number of seconds to wait for the RUNNING state before switching to the backup server.
/WD         Watchdog tag specifications.
/WQ         Fail over if a watchdog tag has bad quality or any error.
/WS         Fail over if the server leaves the RUNNING state.

Server-Level Failover Options using ICU Control

• Backup OPC Server Node Name – The name or IP address of the backup OPC Server node (/BACKUP).
• List Servers – Click this button to get a list of OPC Server names from the node named in the Backup OPC Server Node Name field.


• Backup OPC Server Name – The registered name of the backup OPC Server on the above node (/BACKUP).
• Number of Interfaces on this Node – The count of instances of the OPC DA interface that are running on this node (/NI=#).
• Switch to Backup Delay (sec) – The number of seconds to try to connect before switching to the backup server (/FT=#).
• Wait for RUNNING State (sec) – The number of seconds to wait for the RUNNING status before switching to the backup server (/SW=#).
• Current Active Server Tag – The string tag into which the name of the currently active server is written (/CS=tag).
• Primary Server Watchdog Tag – Watchdog tag for the primary server (/WD1=tag).
• Backup Server Watchdog Tag – Watchdog tag for the backup server (/WD2=tag).
• Multiple Watchdog Tag Trigger Sum – When using multiple watchdog tags, failover is triggered if the sum of the values of these tags drops below the value entered in this box (/WD=#).
• Maximum Number of Tags which can have Bad Quality or Any Error without Triggering Failover – Trigger a failover if more than # watchdog tags have bad quality or any error. If one watchdog tag is configured, set /WQ=0. If more than one watchdog tag is configured, # can be set from 0 to the number of watchdog tags configured minus 1 (/WQ=#).
• Failover if Server Leaves RUNNING State – (/WS=1).
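Taken together, the ICU settings above simply emit command-line parameters. As a sketch, a hypothetical resulting configuration (tag, node, and server names invented for illustration) might look like:

/SERVER=OSI.DA.1 /BACKUP=othernode::OSI.DA.1 /NI=2 /FT=60 /SW=120 ^
/CS=OPC_CurrentServer /WD1=Server1Active /WD2=Server2Active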

Server-Level Failover Configurations

These are the server-level failover options supported by the interface. This section does not deal with timing of failover at all, only with how failover is triggered. See the section Controlling Failover Timing for timing considerations.

Inactive Server Does not Allow Connections

Use the /BACKUP parameter to provide the name of the other OPC server. If the interface cannot connect to one server, it will try the other one. The selection of which server is active is completely managed by the servers.

/SERVER=OSI.DA.1 /BACKUP=othernode::OSI.DA.1

Inactive Server Leaves OPC_STATUS_RUNNING State

Use the /WS parameter to control this. After the interface is connected to a server and collecting data, the server’s state is checked every 30 seconds. With the /WS flag set, if the server leaves the RUNNING state, the interface will disconnect from the current server and try to connect to the other server.

/WS=1
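For example, a minimal sketch combining /WS with the backup server definition from the previous section (node and server names are illustrative):

/SERVER=OSI.DA.1 /BACKUP=othernode::OSI.DA.1 /WS=1 /SW=120

Here /SW=120 (described under Controlling Failover Timing) caps how long the interface waits for the new server to reach the RUNNING state.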

Inactive Server sets Quality to BAD

Some servers indicate that they are not the active server only by setting the quality of some, or all, of their items to BAD. This can be used to trigger failover of the interface to the other server, but the quality of the tag being used as a watchdog must be bad only when the interface should fail over. /WQ=# directs the interface to fail over to the other server if more than # watchdog tags have bad quality or any error. Note that v1.0a servers do not return error codes for individual items, so for version 1.0a servers this parameter only checks the quality of the value sent from the server. If one watchdog tag is configured, set /WQ=0. If more than one watchdog tag is configured, # can be set from 0 to the number of watchdog tags configured minus 1.
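For example, a sketch with three watchdog tags (marked via Location3, as described under Watchdog Tags below), where one bad-quality watchdog is tolerated before failing over (values illustrative):

/SERVER=OSI.DA.1 /BACKUP=othernode::OSI.DA.1 /WQ=1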

Watchdog Tags

For server-level failover, a specific PI tag can be defined as a watchdog tag. The OPC item that this tag reads must have a specific, known value when the server is able to serve data and another specific, known value when the server is unable to serve data. It is called a watchdog tag because its value changes to announce a change in the server status.

The remaining configuration options use watchdog tags. Watchdog tags allow the OPC servers to notify the interface which server is the currently active server. If the value of the watchdog tag representing a server is greater than zero, that server is the active server.

There are two different modes for using watchdog tags: isolated mode and server-specific mode. In isolated mode, each server knows only its own state. The items being used for these watchdog tags represent the current state of the server (such as backup state or active state), and could have different values for the two servers at any given time. In server-specific mode, both servers know the state of the other server, so the items being used for the watchdog tags should match. In general, server-specific watchdog tags are a more robust failover model.

Note that watchdog tags are read in the same way as normal data tags, and the values are passed along to PI. The PI tags must be configured as integer tags, but Location2 settings can be used to read other datatypes into the integer tags. Also, the same scaling and transformation formulas are applied to the watchdog tags as to ordinary tags. Therefore, using an integer PI tag and scaling parameters, the interface can recognize values of -3 and 7 as 0 and 10, respectively. Any transformation that results in an integer value of 0 for backup and greater than 0 for active can be used in a watchdog tag.

The watchdog tags should be configured as Advise tags if the server supports Advise tags; otherwise they should be put into a scan class with a short scan period. Whenever values are received from the server, whether polled or advised, they are checked to see if they match the current state of the interface. If the watchdog tags indicate that the interface should be connected to the other server, the interface will disconnect from the current server and attempt to connect to the other server. If it cannot connect successfully to the other server within the configured failover time given by the /FT parameter on the command line, it will revert to the original server and try it again, in case that server has now become the active server.


Isolated Watchdog Tags

With isolated watchdog tags, each server knows only its own state. There are two ways to use this model. The simple version has one tag, which by itself shows whether the server is ready to serve data. Multiple tags can also be used to reflect the server’s connections to its underlying data systems.

One Tag

The same Item in each server reflects the state of that server. The interface will read the Item as an ordinary data value, and if the value is not greater than zero, the interface will disconnect from the server and attempt to connect to the other server. At least one of the servers should always return a 1 as the current value for this Item. The watchdog tag is identified to the interface with the /WD1 parameter. With this model, the /WD2 parameter is not used; if /WD2 is specified without /WD1, it will be ignored by the interface.

OPC Server 1        OPC Server 2
Watchdog1 = 1       Watchdog1 = 0

/WD1=ServerActive
PI tag ServerActive has Instrumenttag = Watchdog1

Multiple Watchdog Tags

Multiple tags can be defined as watchdog tags, and the sum of their values determines whether the server is active or not. In this model, the server may have access to flags that show whether it is collecting data from various sources. As long as some number of those flags show data collection, the server should continue to be used, but if enough of those flags show a connection loss, the other server should be tried to see if it has access to the underlying devices. With this model, the watchdog tags are not specified on the command line. Instead, for each of those tags, Location3 is set to 3 for polled tags, or 4 for Advise tags. A minimum value is also specified that defines an active server. The interface will assume that the value of each watchdog tag is 1 at startup, and each time it gets a new value from the server, it will subtract the old value from the watchdog sum and then add the new value to it.

OPC Server 1        OPC Server 2
PLC1 = 1            PLC1 = 1
PLC2 = 1            PLC2 = 0
PLC3 = 1            PLC3 = 1

/WD=2
PI tag ServerActive1 has Instrumenttag = PLC1
PI tag ServerActive2 has Instrumenttag = PLC2
PI tag ServerActive3 has Instrumenttag = PLC3

As long as the sum of these three tags is at least 2, the interface will continue to collect data from the connected server. If the sum of the values goes below 2, the interface will fail over to the other server.

Using Quality

The interface can also use the quality of the watchdog tags to decide which server is active. Just as above, one or more tags can be specified as watchdog tags, and a maximum is specified for the number of those tags for which the interface either gets an error or a BAD quality. If more than the maximum number of tags have an error or a BAD quality, the interface will fail over to the other server. For one tag, use the /WD1=tagname method and set /WQ=0.

OPC Server 1        OPC Server 2
Watchdog1 = 1       Watchdog1 = BAD quality or error

/WD1=ServerActive /WQ=0
PI tag ServerActive has Instrumenttag = Watchdog1

To use the quality of multiple tags, set Location3 for those tags to either 3 (for polled tags) or 4 (for Advise tags), and specify the maximum number of tags that can have BAD or error status on the active server.

OPC Server 1            OPC Server 2
PLC1 = 1                PLC1 = 1
PLC2 = BAD or error     PLC2 = 1
PLC3 = 1                PLC3 = 1


/WQ=1
PI tag ServerActive1 has Instrumenttag = PLC1
PI tag ServerActive2 has Instrumenttag = PLC2
PI tag ServerActive3 has Instrumenttag = PLC3

Note that here the maximum error or bad quality count is being specified for an active server. In the above example, both of these servers could be active, but if Server 2 loses another PLC, it will no longer be able to be the active server.

Server-specific Watchdog Tags

For the server-specific tag model, two items must be defined in both servers that will give the status of the servers. Two PI tags are defined to read those values, and the tags are defined to the interface by the /WD1 and /WD2 parameters.

OPC Server 1        OPC Server 2
Watchdog1 = 1       Watchdog1 = 1
Watchdog2 = 0       Watchdog2 = 0

/WD1=Server1Active /WD2=Server2Active
PI tag Server1Active has Instrumenttag = Watchdog1
PI tag Server2Active has Instrumenttag = Watchdog2

It is important that the two OPC Servers agree on which server is active and which is in backup mode. In active mode the OPC Server should serve data to the client, while in backup mode it should wait until the primary server fails. At any given time, only one of the watchdog tags should be zero, and it must be the same tag on both servers, unless only one server is accepting connections and serving data. If both watchdog tags are zero, neither server will be seen as active, and data collection will stop until one watchdog tag becomes nonzero. If both watchdog tags are greater than zero, the interface will remain connected to whichever server it is currently getting data from.

Multiple Interfaces

If more than one instance of the OPC DA interface is running on the same node, PI tags will need to be created for each instance of the interface. Since each interface scans only those PI tags that belong to the interface’s unique point source, one set of watchdog tags with one pointsource does not get picked up by another instance of the interface with a different point source. In short, although there need be only 1 or 2 watchdog tag Items in each OPC server, a separate pair of PI tags (with the appropriate pointsource) must be configured in PI for each instance of the interface.

OPC Server 1        OPC Server 2
Watchdog1 = 1       Watchdog1 = 1
Watchdog2 = 0       Watchdog2 = 0

Interface A, with point source A:
/WD1=Server1Active /WD2=Server2Active
PI tag Server1Active has Instrumenttag = Watchdog1
PI tag Server2Active has Instrumenttag = Watchdog2

Interface B, with point source B:
/WD1=B1Active /WD2=B2Active
PI tag B1Active has Instrumenttag = Watchdog1
PI tag B2Active has Instrumenttag = Watchdog2

Logging the Current Server

A PI string tag can be configured to receive the name of the server that is currently providing data. This makes it possible to trace which server was active at any given time, and when the interface failed over from one server to another. The /CS parameter specifies the tagname of the PI string tag created to hold this information. This tag should use a Manual or Lab point source, or some other point source that is not in use, since the interface does not treat it as a normal interface tag; it simply writes to it when the connection changes from one server to the other. For the same reason, edits to this tag will not be seen by the interface until the interface is restarted, and will not matter even then, since no edit to this tag changes the interface’s behavior.
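A sketch (the tag name and point source are hypothetical):

/CS=OPC_CurrentServer

where OPC_CurrentServer is a PI string tag created with an unused point source (for example, L for Lab), so that the interface does not treat it as a normal data tag.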

Controlling Failover Timing

You can use three parameters to control the timing of server-level failover. The interface should not wait too long before recognizing that the current server is no longer available, but neither should it switch to the other server so quickly that it finds the other server inactive and has to switch back. The default values for Failover Time and State Wait are reasonable and should generally be kept; however, you can change them either to wait longer before failing over to the other server or to give up on connecting to a server more quickly.

The Failover Time setting (/FT=#) defines the number of seconds to keep trying to reconnect to the current server before giving up and failing over to the other server. This parameter does not affect how the interface determines that the current server is no longer responding; it only affects how long the interface will try to connect to one server before it gives up and tries the other. This setting also affects how often the server state is checked and when the clock offset is updated: if Failover Time is set to less than 30 seconds, the interface will check the server state every Failover Time seconds.

To give the local system more time to handle requests when there are a number of interfaces running on one system, use the Number of Interfaces parameter (/NI=#) as a multiplier for the Failover Time. This is most useful when the interface is running on the same system as the OPC server, and it should be set to the number of copies of the interface that are running on the system. The reason for this is that most OPC servers will be slower to respond when multiple clients all try to connect at the same time. For the same reason, it is suggested that the RESTART_DELAY parameter be used to stagger the startup of the interfaces if more than one copy is running on a system. The /NI parameter will also smooth the startup and give the OPC server more time to respond.

Use the State Wait parameter (/SW=#) to specify how many seconds the interface will wait for the server to enter the RUNNING state before giving up and trying to connect to another server. Without this switch, after the interface connects, it will wait indefinitely for the server to enter the RUNNING state. Appropriate values vary widely by server: some enter the RUNNING state almost immediately, while others require minutes to verify all their remote connections before entering the RUNNING state.
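For instance, a hypothetical tuning for two interface instances sharing a node with a slow-starting OPC server (values are illustrative, not defaults):

/FT=60 /NI=2 /SW=180

With /NI acting as a multiplier for the Failover Time as described above, the interface allows roughly 120 seconds (60 x 2) of connection attempts before failing over, and waits up to 180 seconds for the server to reach the RUNNING state.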

Logfile Messages for Server-Level Failover

The interface will log a number of informational messages concerning the configuration and operation of server-level failover. First, at startup, when it recognizes the parameters given for failover, it logs an acknowledgement of them, so that the configuration can be verified as having been read correctly. Parameters that are not understood are ignored, as though they had not been used at all. The following are the startup informational messages for each parameter.

/CS
   Current Server tag is %s
   Can’t find server tag, ignoring
   Server tag is not String tag, ignoring

/NI
   /NI, number of interfaces on the node, must be greater than 0. Argument ignored.

/WD1
   Can’t find Watchdog1 tag %s, ignoring
   Can’t get Watchdog1 tag type, ignoring
   Watchdog1 tag is not Integer tag, ignoring
   Watchdog1 tag will be: %s:
   Note that the tagname above is delimited by colons.

/WD2
   Can’t find Watchdog2 tag %s, ignoring watchdogs
   Can’t get Watchdog2 tag type, ignoring watchdogs
   Watchdog2 tag is not Integer tag, ignoring
   Watchdog2 tag will be: %s:
   Note that the tagname above is delimited by colons.

Further, when the interface fails over, it will log the reason, if the reason is something other than the current connection failing.

/WS
   Server left RUNNING state, failing over

Other messages will only show if debugging is turned on, because the assumption is that the same information is available in a PI tag or from other messages in the log file. If the interface times out on a call to the server, or gets an error on a call, that will trigger a failover.

If the watchdog value changes, this will be reflected in the PI archive. But if debugging is set to 128 (or higher, since the debug value is additive), messages such as these will be seen, as well as many others:

Watchdog flipping over
Watchdog has error, flipping over
Watchdog has bad quality, flipping over

Chapter 4. Interface-Level Failover Using Microsoft Clustering

Microsoft Clustering (MSCS) is required for non-UniInt interface-level failover. This involves two copies of the interface running on cluster nodes, referred to as the primary and backup interfaces. The primary interface is the one that actively gets data from the OPC Server and sends data to the PI System. The backup interface can be controlled as to whether it connects to the OPC server, creates groups active or inactive, and adds tags to the groups. In any case, only the active interface signs up for exceptions with the OPC server or requests data, and only the active interface sends data to PI. These features avoid over-burdening the OPC server or the backend data source with duplicate requests for data from the redundant pair of interfaces.

With MSCS, failover is managed by creating resource groups which contain cluster resources. At any given point, only one of the two cluster nodes has possession of a resource group; “possession of a resource group” means that the resources belonging to that group are running on that node. The primary cluster group contains the resources for the physical devices that are required by the cluster. Other groups can be created and populated with other resources. OSIsoft provides a cluster resource of type Generic Service called Apionline, which may be used as the cluster resource which the interface will watch. At startup, each interface checks whether the designated resource is running on its node; if it is, that interface is the active interface. The overriding rule is that whichever node currently owns the resource is the node that has the active interface.

The following table lists the five parameters used for MSCS interface-level failover. See the following sections for further information on them.

Parameter     Description
/PR=#         Specifies whether this is the primary (/PR=1) or backup (/PR=2) node.
/RN=#         Specifies the number used for the matching Apionline and resource.
/CM=#         Specifies the Cluster Mode: a bias toward running on the primary node (/CM=0) or no bias (/CM=1).
/CN=tagname   Specifies a PI string tag that receives the name of the node that is currently active. Optional.
/FM=#         Failover Mode. Selects behavior for the backup interface: Chilly (/FM=1), Cool (/FM=2), or Warm (/FM=3).

Choosing a Cluster Mode

In Cluster Mode 0 (/CM=0), one interface is designated the primary interface, and this interface will always be the active interface if at all possible. That means that if the other interface is currently the active interface and the interface on the primary node is started, the primary interface will move the resource onto its node so that it becomes the active interface. It is not possible to have the interface on the primary node up, running, and connected to an OPC server, and still have the interface on the backup node be the active interface. Cluster Mode 0 has a bias toward keeping the primary node the active node.

With Cluster Mode 1 (/CM=1), the interface does not attempt to control which node is the active node. Whichever node is currently active will remain the active node until a problem causes a failover, or until human intervention either moves the resource or shuts down the active interface. Cluster Mode 1 has no bias at all.
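As a sketch of the two styles (only the mode-related parameters shown; the full command lines appear later in this chapter):

Primary node (bias toward this node):   /CM=0 /PR=1
Backup node:                            /CM=0 /PR=2

With no bias, both nodes would instead use /CM=1, one with /PR=1 and the other with /PR=2.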

Failover Mode

The Failover Mode determines what the backup interface does with respect to connecting to an OPC server, creating groups, and adding tags. The more of this that is done already, the faster failover is, but there may be a cost in loading up the OPC server or the underlying data system. So choose a Failover Mode based on how long it takes to start getting data after a failover, and whether one or more Failover Modes puts an unacceptable load on the OPC server or data system. Since this will vary widely from server to server and site to site, the best choice may have to be determined by trial and error, or by checking the OPC server documentation, or asking the OPC server vendor.


/FM=1 Chilly failover: connect to the server but do not create groups. This is the slowest failover mode, and is not recommended for use with servers where adding tags to groups takes a long time. However, it is very unlikely for this mode to put any measurable load on the server or the underlying data system.

/FM=2 Cool failover: connect to the server, create groups inactive, and add tags. This will work well for servers that do not use any resources for inactive groups. Since this is outside the scope of the OPC standard, verify that the server is not loaded down by having inactive groups. Some servers will use resources to keep track of items in inactive groups due to the requirements of their particular data system. This mode may involve some small delay at failover, as the server is not required to keep the current value for the items, and at failover the server may have to acquire the current value for all of the items at the same time.

/FM=3 Warm failover: connect to the server, create groups active, add tags, but do not advise the groups. This is the same as the old default behavior. Some servers use minimal resources for handling this mode, particularly servers that have the data already. Other servers that have to acquire the data specifically to comply with the OPC standard may be overloaded by this setting. This is the fastest failover mode, since at failover the interface simply requests that the server start sending data and the server will already have the current values available.

How It Works

Apionline

Apionline is a very simple program that exists only to watch for the interface. The name of the interface service that a given copy of Apionline is to watch is specified to Apionline, and the command line of that interface tells it which copy of Apionline to look for.

If the interface sees that copy of Apionline running, this is the active node. The cluster manager will start Apionline on the node that currently owns the resource group. If Apionline does not see its designated copy of the interface running, Apionline will shut down. That causes the cluster manager to start Apionline on the other node of the cluster, which notifies the interface on that node that it is now the active interface.

Apionline and the interface act together: Apionline cannot run if the interface is not running, and the interface will only collect data if Apionline is running. MSCS is responsible for starting and stopping Apionline, either automatically or in response to human intervention making changes with Cluster Administrator. However, the interface itself will shut down Apionline if it cannot collect data, or if Cluster Mode 0 is being used and the backup node is currently the active node. Both the primary and the backup interface, running in either mode, will shut down their local copy of Apionline if the interface is unable to collect data. However, only the primary interface, running in Cluster Mode 0, will shut down the copy of Apionline running on the backup node in order to take over as the active interface.

Note that the same name must be used for the interface on both nodes, since Apionline will use the same parameters whichever node it runs on, and one of those parameters is the name of the service to watch. So if the interface is installed to a directory called OPC1 on the first node, it must be installed to a directory called OPC1 on the second node. What matters is the name of the actual directory that the interface is installed into; the path to the directory can differ. For example, one node could use:

c:\Program Files\PIPC\Interfaces\OPC1

and the other could use:

d:\OSI\PIPC\Interfaces\OPC1

and the failover configuration will still work. For clarity, OSIsoft recommends that the same naming and placement scheme be used across all the systems.

Configuring APIOnline

With the interface installation, two files are installed in the interface directory: Apionline.bat and Apionline.exe. If the default directory was used for the installation, the interface directory is OPCInt, and this is also the name of the interface executable file and the name that was used to install the interface as a service. In this case, no files need to be edited. If you examine the services in the Control Panel, you will see one called OPC for PI. The parameter in Apionline.bat must match the name of the OPC DA interface service as it appears in the list of services. Next, the Apionline service must be installed with this command:

Apionline -install -depend tcpip

Multiple Interfaces

To use multiple instances of the interface, copies of Apionline.exe and Apionline.bat should be made for each interface instance, with a unique integer appended to the name (Apionline1, Apionline2, etc.). Apionline by itself, without the number suffix, is acceptable too. Any non-negative integer can be used, but it should match the number specified to the interface with the /RN=# parameter. Creating multiple copies of the interface executable, with the same number appended, is also suggested, resulting in the following:

Apionline1.exe, Apionline2.exe, Apionline3.exe
Apionline1.bat, Apionline2.bat, Apionline3.bat

and

opcint1.exe, opcint2.exe, opcint3.exe

This makes it simpler to track problems or balance load. Also, it helps if separate ID numbers (/ID=#) are used for each copy of the interface, as the ID shows up in the pipc.log file and makes reading the log file much easier.

You must edit each Apionline#.bat file to set the correct names for Apionline# and opcint#. For example, with the above three instances of the interface, Apionline1.bat would contain:

Apionline1.exe /proc=opcint1

and opcint1.bat would include the parameters:

/RN=1 /ID=1

Likewise, Apionline2.bat would have:

Apionline2.exe /proc=opcint2

and opcint2.bat would include:

/RN=2 /ID=2

And Apionline3.bat would have:

Apionline3.exe /proc=opcint3

and opcint3.bat would include:

/RN=3 /ID=3

Note that the ID has to match Location1 of all tags for the interface; having the ID match the resource number is a suggestion, not a requirement. The interface will work correctly if the same ID is used for all three copies of the interface, but reading the pipc.log file will be considerably harder.

After the files have been made, these resources should be installed as services on the Interface Node as follows:

Apionline# -install -depend tcpip

So for the example here, all three copies of Apionline would be installed as:

Apionline1 -install -depend tcpip
Apionline2 -install -depend tcpip
Apionline3 -install -depend tcpip

Finally, running multiple instances of the interface on each node in the cluster requires creating a uniquely named resource for each pair of redundant interfaces. Each resource should be created in its own uniquely named group. Since each instance of the interface will have a slightly different timing sequence in connecting to the server, the instances cannot be properly supported for failover if all of them share the same resource. MSCS moves resources by group: resources can be configured not to affect the group failover, but once failover takes place, they move together as a group. Having separate resources and groups also allows for specific load-balancing arrangements, for instance having one active interface on each node, so that both nodes are lightly loaded as long as both are fully functional, while either node retains the ability to take the full workload if the other fails.

So, for the above case with three copies of the interface running on each node, three resource groups would be created, and each group would have one copy of Apionline as its resource. Follow the directions in Group and Resource Creation three times, once for each copy of the interface, to set up the three groups and resources.

OPCStopStat and Failover

No OPCStopStat is written when an interface fails over or fails back. To track which interface is active, use the /CN parameter to have the interface write the Current Node to a PI tag, whenever it changes. At interface shutdown, if the interface is the active interface, the digital state specified for OPCStopStat will be written to all tags. The writing of OPCStopStat at interface shutdown can be avoided if Cluster Mode 1 is being used, by moving the resource to the other node before shutting down the interface.
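A sketch (the tag name is hypothetical):

/CN=OPC_CurrentNode

where OPC_CurrentNode is a PI string tag that receives the name of the active cluster node whenever it changes.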

Checklist for Cluster Configuration

OSIsoft recommends that you verify the configuration step by step. The following is a simple configuration procedure that will help you identify any problems quickly. It is written for just one copy of the interface on each node; if multiple copies are being configured, only the first five steps need to be done for the first copy of the interface that is tested. Where “matching” is used below, it means that when working with opcint3.exe, for example, you look for apionline3.exe and the apionline3 service and resource.

1. Configure the interface on each node with a dummy pointsource, one that is not currently used by any tags, or with a pointsource and ID number that do not match the pointsource and Location1 pair of any tags. The idea is to bring up both interfaces with no tags at all. Give them the correct Server and Host, but do not configure any failover-related parameters.

2. Start both interfaces and check the pipc.log to verify that both of them come up completely and with no tags. If there are errors at this point, they are probably permission or DCOM problems. You must correct any errors reported in the pipc.log before continuing with the next step.

3. Use Cluster Administrator to bring the matching cluster resource online by selecting the matching cluster group, then right-clicking the resource and selecting Bring Online. Use the Task Manager to verify that the matching Apionline process is running on the node that Cluster Administrator says owns the resource. That node will be called the OriginalOwner.

4. Still using Cluster Administrator, fail over the resource by selecting Initiate Failure in the right-click menu of the resource. The resource state should go to Failed, then Online Pending, and then Online, with the other node now the owner. Depending on the system, the intermediate states may not be seen, but the resource should end up Online with the other node as the owner. If not, there is a configuration problem, and it must be corrected before continuing the test.


5. Use the Task Manager to verify that the matching Apionline on the OriginalOwner node is no longer running and that the matching Apionline service is now running on the other node (the OriginalBackup node). If all this is correct, move the resource to whichever node will be the primary node.

6. Use Cluster Administrator to take the resource Offline, then shut down both copies of the interface and configure them for production. Remember to reset the pointsource and /ID to the correct values. At this point, Cluster Administrator should show the resource offline but owned by the primary node. If not, move the resource group to the primary node while leaving the resource itself offline. This can be done by right-clicking the group and selecting Move Group.

7. Bring up the interface on the node that does not currently own the group. The following message should appear in the pipc.log:

Cluster resource not online, state 4, waiting

8. Bring the resource online. The resource should fail over to the node where the interface is running. After Apionline is running on the same node as the interface, the following message should appear in the pipc.log:

Cluster Resource apionline1 on this node

or alternatively,

Resource now running on this node

9. Now bring up the other interface. If Cluster Mode 0 is being used, the resource will now fail over to the primary node. One of the two messages listed in the last step should appear in the pipc.log on the primary node.

The interfaces should now be configured correctly. As a further test, try failing over the resource a time or two, and shutting down one interface at a time, just to make sure that the interfaces do what is expected.

Configuring the Interface for Cluster Failover

The five parameters that are used for clustered failover are listed in the following table:

Parameter     Description
/PR=#         Specifies whether this is the primary (/PR=1) or backup (/PR=2) node.
/RN=#         Specifies the number used for the matching Apionline and resource. If a resource number is not specified, the interface will not run. If running only one copy of the interface, and a number was not appended to Apionline, /RN=-1 should be used (any negative number will do).
/CM=#         Specifies the Cluster Mode: a bias toward running on the primary node (/CM=0) or no bias (/CM=1).
/CN=tagname   Specifies a PI string tag that receives the name of the node that is currently active. Optional.
/FM=#         Failover Mode. Selects behavior for the backup interface: Chilly (/FM=1), Cool (/FM=2), or Warm (/FM=3).

For Cluster Mode 1, it does not matter which node has /PR=1 and which has /PR=2, but one of each is necessary. The following is an example, using Cluster Mode 0, with the primary node as Wallace, and the backup node as Grommit.

One Interface

If using the default directory for installation, the interface is called OPCInt. The following files should be found in the opcint directory: apionline.exe, apionline.bat, opcint.exe, opcint.bat.

Group: OPC Group
Resource: Apionline, with Service Name Apionline and Start Parameter /proc=opcint

apionline.bat:
apionline.exe /proc=opcint

opcint.bat on Wallace, the primary node:
opcint /ps=o /ec=10 /er=00:00:03 /id=1 /df=OPCDBGF ^
/TF="ccyy/mn/dd hh:mm:ss.000" /SERVER=OSI.HDA.1 /host=widget:5450 ^
/MA=Y /ts=Y /opcstopstat /f=00:00:01 /f=00:00:01 /f=00:00:01 /f=00:00:02 ^
/CM=0 /PR=1 /RN=-1 /FM=3

opcint.bat on Grommit, the backup node:
opcint /ps=o /ec=10 /er=00:00:03 /id=1 /df=OPCDBGF ^
/TF="ccyy/mn/dd hh:mm:ss.000" /SERVER=OSI.HDA.1 /host=widget:5450 ^
/MA=Y /ts=Y /opcstopstat /f=00:00:01 /f=00:00:01 /f=00:00:01 /f=00:00:02 ^
/CM=0 /PR=2 /RN=-1 /FM=3

Note that only the last line of opcint.bat in this example is specifically for the cluster configuration, and the two opcint.bat files are the same except for the /PR=# parameter.


Three Interfaces

For each interface, make sure there are an apionline.exe, apionline.bat, the resource and group, opcint.exe, and opcint.bat.

For interface 1, the files are apionline1.exe, apionline1.bat, opcint1.exe, and opcint1.bat.

Group: OPC Group1
Resource: Apionline1, with Service Name Apionline1 and Start Parameter /proc=opcint1

apionline1.bat:
apionline1.exe /proc=opcint1

opcint1.bat on Wallace, the primary node:
opcint1 /ps=o /ec=10 /er=00:00:03 /id=1 /df=OPCDBGF ^
/TF="ccyy/mn/dd hh:mm:ss.000" /SERVER=OSI.HDA.1 /host=widget:5450 ^
/MA=Y /ts=Y /opcstopstat /f=00:00:01 /f=00:00:01 /f=00:00:01 /f=00:00:02 ^
/CM=0 /PR=1 /RN=1 /FM=3

opcint1.bat on Grommit, the backup node:
opcint1 /ps=o /ec=10 /er=00:00:03 /id=1 /df=OPCDBGF ^
/TF="ccyy/mn/dd hh:mm:ss.000" /SERVER=OSI.HDA.1 /host=widget:5450 ^
/MA=Y /ts=Y /opcstopstat /f=00:00:01 /f=00:00:01 /f=00:00:01 /f=00:00:02 ^
/CM=0 /PR=2 /RN=1 /FM=3

Note that only the last line of opcint1.bat in this example is specifically for the cluster configuration, and the two opcint1.bat files are the same except for the /PR=# parameter.

Next, for interface 2:

Group: OPC Group2
Resource: Apionline2, with Service Name Apionline2 and Start Parameter /proc=opcint2

apionline2.bat:
apionline2.exe /proc=opcint2

opcint2.bat on Wallace, the primary node:
opcint2 /ps=o /ec=10 /er=00:00:03 /id=2 /df=OPCDBGF ^
/TF="ccyy/mn/dd hh:mm:ss.000" /SERVER=OSI.HDA.1 /host=widget:5450 ^
/MA=Y /ts=Y /opcstopstat /f=00:00:01 /f=00:00:01 /f=00:00:01 /f=00:00:02 ^
/CM=0 /PR=1 /RN=2 /FM=3

opcint2.bat on Grommit, the backup node:
opcint2 /ps=o /ec=10 /er=00:00:03 /id=2 /df=OPCDBGF ^
/TF="ccyy/mn/dd hh:mm:ss.000" /SERVER=OSI.HDA.1 /host=widget:5450 ^
/MA=Y /ts=Y /opcstopstat /f=00:00:01 /f=00:00:01 /f=00:00:01 /f=00:00:02 ^
/CM=0 /PR=2 /RN=2 /FM=3

Note that the /ID was changed to match the resource number /RN.

Finally, for interface 3:

Group: OPC Group3
Resource: Apionline3, with Service Name Apionline3 and Start Parameter /proc=opcint3

apionline3.bat:
apionline3.exe /proc=opcint3

opcint3.bat on Wallace, the primary node:
opcint3 /ps=o /ec=10 /er=00:00:03 /id=3 /df=OPCDBGF ^
/TF="ccyy/mn/dd hh:mm:ss.000" /SERVER=OSI.HDA.1 /host=widget:5450 ^
/MA=Y /ts=Y /opcstopstat /f=00:00:01 /f=00:00:01 /f=00:00:01 /f=00:00:02 ^
/CM=0 /PR=1 /RN=3 /FM=3

opcint3.bat on Grommit, the backup node:
opcint3 /ps=o /ec=10 /er=00:00:03 /id=3 /df=OPCDBGF ^
/TF="ccyy/mn/dd hh:mm:ss.000" /SERVER=OSI.HDA.1 /host=widget:5450 ^
/MA=Y /ts=Y /opcstopstat /f=00:00:01 /f=00:00:01 /f=00:00:01 /f=00:00:02 ^
/CM=0 /PR=2 /RN=3 /FM=3

Buffering Data on Cluster Nodes

Buffering is fully supported on cluster nodes. To take advantage of buffering, install bufserv.exe on all participating cluster nodes at the time of PI API installation. No special configuration is required to enable buffering on a cluster node.

Note that there is a risk of incurring a substantial amount of out-of-order data if a failover occurs while both interfaces are disconnected from PI (and thus buffering data). Upon reconnection, each cluster node will send its buffered data simultaneously, which will result in out-of-order data. This will increase resource consumption on the PI server, particularly in the PI Archive Subsystem, as it processes these out-of-order events.

Group and Resource Creation Using Cluster Administrator

Before taking this step, make sure that MSCS is installed and configured. Test and verify that Clustering is functioning correctly prior to creating groups and resources for OPCInt interface failover. At the end of this section are steps for verifying correct cluster configuration. Directions specified here are for using the resources with Cluster Mode 0, with the assumption being that if using Cluster Mode 1, the installer has enough information to decide the proper settings for the configuration. Cluster Mode 1 allows much more control over the cluster resource, but the possibilities and considerations for cluster control are beyond the scope of this document.

Cluster Group Configuration

Note: Interfaces must not be run under the Local System account if using Cluster Failover. The service must be set up to run under an account that has administrator privileges.

OPC DA Interface Failover Manual 23 Interface-Level Failover Using Microsoft Clustering

Installation of Cluster Group

1. Click Start > Programs > Administrative Tools (Common) > Cluster Administrator.

2. Click File > New > Group. Enter the name of the group and a description.

3. Click Next. Do not add anything to the Preferred owners box, since owner preference is built into the interface for Cluster Mode 0. In this example, Grommit and Wallace are the cluster nodes.

4. Click Finish.

5. Right-click the group just created and select Properties. Fill out the name and the description. Do not select Preferred owners (the nodes on which the group would be preferred to run); preferred ownership is built into the interface when using Cluster Mode 0, and therefore should not be set from Cluster Administrator.

6. Set the Threshold and Period. Threshold is the maximum number of times that a group is allowed to fail over in the time specified by Period.


7. On the Failback tab, select the Prevent failback check box, since the failback mechanism is also built into the interface when using Cluster Mode 0.

8. Click Apply and then OK.

Installation of the Resources

1. Right-click the group in Cluster Administrator, select New, and then Resource. Type the name of the resource and a description. For the Resource type, select Generic Service.

Running this resource in a separate Resource Monitor is not necessary, unless the resource seems to be causing problems and you want to isolate it.

2. Click Next and verify that the cluster nodes are in the Possible owners list. These are the nodes on which the resource can run, and therefore the nodes onto which the group can fail over.

3. Click Next and skip Dependencies. Move on to Generic Service Parameters.


4. This resource (in this example, apionline1) should have been installed as a service prior to cluster resource creation by typing, in a Windows command-line window:

Apionline1 -install -depend tcpip

5. /proc is a parameter that the Apionline resource needs. It identifies the process that must be running before Apionline itself can be brought online. This parameter should be the name of the opcint service for which this resource is being defined. Click Next and skip Registry Replication.

6. Click Apply and OK.

7. Right-click the resource and then select Properties. On the Advanced tab, configure the restart settings. Note that MSCS is being instructed to restart the resource, but to fail it over to the other node every time. This means that when Apionline shuts down because its interface is not running, or when the primary interface (running in Cluster Mode 0, with a bias toward the primary node) shuts down Apionline on the backup node, MSCS will first move ownership of the resource to the other node before restarting it.

8. Click Apply and then OK.

Repeat the group and resource creation process for each instance of the interface on the node. The interface is then ready to be configured.

Logfile Messages for Interface-Level Failover

The messages sent to the pipc.log will vary somewhat depending on what the cluster does. In general, any time the interface detects that something has changed (for example, ownership of a resource has shifted to another node), it will generate a message.

When the interface first connects to the cluster, it checks whether the cluster resource is online. If it has trouble connecting, or if the resource is offline, some of these messages may be seen:

Failed to open cluster: error %d. Will try again in 5 seconds.
Failed to open cluster resource %s: error %d. Will try again in 5 seconds.
Cluster resource not online, state %d, waiting

It will keep trying until it succeeds. The “Failed to open” messages will repeat, since a failure to open the cluster probably indicates a problem with the cluster, and the interface should not wait silently. If the resource is simply not online, the interface will wait indefinitely, on the assumption that the resource was taken offline intentionally.

After it is connected, one of these messages should be generated:

Cluster Resource %s on this node
Cluster Resource %s NOT on this node

After that, this message might be generated:

Failed to get group handle: error %d. Will try again in 5 seconds.

Again, that message will repeat while the condition persists. Finally, the last possibility for failure before going into steady-state mode is the message:

Error creating cluster port: %lX. Failover disabled.

This one is fatal, as far as failover is concerned. If this message is received, contact OSIsoft Technical Support.

Once everything is running properly, the following messages will be generated as the cluster resource moves from one node to the other:

Resource now running on this node
Resource no longer running on this node
Group owned by this node
Group NOT owned by this node

If more messages are required, set /DB=128 using a debug flag tag that is defined with /DF (see the OPC DA Interface Manual for more information). This setting will cause the interface to display more messages when something happens, for instance when the resource state changes but the owner does not change.
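As a sketch (the debug flag tag name OPCDBGF is reused from the .bat examples above; see the OPC DA Interface Manual for the authoritative /DF description): start the interface with the debug flag tag defined, then write 128 to that PI tag to enable the additional messages at run time:

opcint ... /df=OPCDBGF ...

Because the debug value is additive, 128 can be combined with other debug levels already in use.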

Chapter 5. Using Combination of Server- and Interface-Level Failover

The interface can be configured using a combination of server-level and interface-level failover. Only one type of interface-level failover can be combined with server-level failover in a given configuration; if UniInt-based interface-level failover is used, the Microsoft-clustering-based failover will be disabled automatically.

The easiest way to configure interfaces for both kinds of failover is to configure them for interface-level failover first and verify that it is working properly. Next, remove those parameters from the opcint.bat files, and configure and test server-level failover for each interface separately. Once server-level failover is working properly, add the interface-level failover parameters back into the opcint.bat files.
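A sketch of a combined configuration, with parameter values drawn from the examples in earlier chapters (names illustrative). The first two lines handle server-level failover between the two OPC servers; the last line handles cluster-based interface-level failover:

opcint /ps=o /id=1 /SERVER=OSI.DA.1 /BACKUP=othernode::OSI.DA.1 ^
/WD1=Server1Active /WD2=Server2Active /FT=60 ^
/CM=0 /PR=1 /RN=-1 /FM=3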

Note: If server-level failover is combined with the UniInt-based interface-level failover, the control points required for that type of failover must be created on the underlying data source, that is, the DCS, PLC, or other device. It is important that both the primary and backup OPC Servers share the same control points; this prevents faulty behavior of the interfaces when the primary OPC Server becomes unavailable.
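For illustration, a minimal opcint.bat sketch combining server-level failover with the cluster-based interface-level failover might look like the fragment below. All values are examples; /SERVER and /BACKUP name the primary and backup OPC servers for server-level failover, while the cluster-related parameter names (/CN, /CM, /FM) are assumptions here and should be checked against the configuration chapters of this manual and the OPC DA Interface Manual.

   rem Hypothetical combined configuration (all values are examples).
   rem /SERVER and /BACKUP give the primary and backup OPC servers;
   rem /CN, /CM, and /FM are assumed to name the Apionline resource,
   rem the cluster mode, and the failover mode, respectively.
   opcint.exe /PS=O /ID=1 /host=PIServer:5450 ^
       /SERVER=PrimaryNode::OSI.DA.1 /BACKUP=BackupNode::OSI.DA.1 ^
       /CN=apionline1 /CM=0 /FM=1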

Appendix A. Technical Support and Resources

You can read complete information about technical support options, and access all of the following resources, at the OSIsoft Technical Support Web site: http://techsupport.osisoft.com

Before You Call or Write for Help

When you contact OSIsoft Technical Support, please provide:
•  Product name, version, and/or build numbers
•  Computer platform (CPU type, operating system, and version number)
•  The time that the difficulty started
•  The log file(s) at that time

Help Desk and Telephone Support

You can contact OSIsoft Technical Support 24 hours a day. Use the numbers in the table below to find the most appropriate number for your area. Dialing any of these numbers will route your call into our global support queue to be answered by engineers stationed around the world.

Office Location           Access Number      Local Language Options
San Leandro, CA, USA      1 510 297 5828     English
Philadelphia, PA, USA     1 215 606 0705     English
Johnson City, TN, USA     1 423 610 3800     English
Montreal, QC, Canada      1 514 493 0663     English, French
Sao Paulo, Brazil         55 11 3053 5040    English, Portuguese
Frankfurt, Germany        49 6047 989 333    English, German
Manama, Bahrain           973 1758 4429      English, Arabic
Singapore                 65 6391 1811       English, Mandarin
                          86 021 2327 8686   Mandarin
Perth, WA, Australia      61 8 9282 9220     English

OPC DA Interface Failover Manual 31 Technical Support and Resources

Support may be provided in languages other than English in certain centers (listed above), based on the availability of attendants. If you select a local language option, we will make best efforts to connect you with an available Technical Support Engineer (TSE) with that language skill. If no local-language TSE is available to assist you, you will be routed to the first available attendant.

If all available TSEs are busy assisting other customers when you call, you will be prompted to remain on the line to wait for the next available TSE or to leave a voicemail message. If you choose to leave a message, you will not lose your place in the queue; your voicemail will be treated as a regular phone call and will be directed to the first TSE who becomes available.

If you are calling about an ongoing case, be sure to reference your case number when you call so we can connect you to the engineer currently assigned to your case. If that engineer is not available, another engineer will attempt to assist you.

Search Support

From the OSIsoft Technical Support Web site, click Search Support. Quickly and easily search the OSIsoft Technical Support Web site’s Support Solutions, Documentation, and Support Bulletins using the advanced MS SharePoint search engine.

Email-based Technical Support

[email protected]

When contacting OSIsoft Technical Support by email, it is helpful to send the following information:
•  Description of issue: Short description of the issue, symptoms, informational or error messages, and history of the issue
•  Log files: See the product documentation for information on obtaining logs pertinent to the situation.

Online Technical Support

From the OSIsoft Technical Support Web site, click Contact us > My Support > My Calls. Using OSIsoft's Online Technical Support, you can:
•  Enter a new call directly into OSIsoft's database (monitored 24 hours a day)
•  View or edit existing OSIsoft calls that you entered
•  View any of the calls entered by your organization or site, if enabled
•  See your licensed software and dates of your Service Reliance Program agreements


Remote Access

From the OSIsoft Technical Support Web site, click Contact Us > Remote Support Options. OSIsoft Support Engineers may remotely access your server in order to provide hands-on troubleshooting and assistance. See the Remote Access page for details on the various methods you can use.

On-site Service

From the OSIsoft Technical Support Web site, click Contact Us > On-site Field Service Visit. OSIsoft provides on-site service for a fee. Visit our On-site Field Service Visit page for more information.

Knowledge Center

From the OSIsoft Technical Support Web site, click Knowledge Center. The Knowledge Center provides a searchable library of documentation and technical data, as well as a special collection of resources for system managers.
•  The Search feature allows you to search Support Solutions, Bulletins, Support Pages, Known Issues, Enhancements, and Documentation (including user manuals, release notes, and white papers).
•  System Manager Resources include tools and instructions that help you manage archive sizing, backup scripts, daily health checks, daylight saving time configuration, PI Server security, PI System sizing and configuration, PI trusts for Interface Nodes, and more.

Upgrades

From the OSIsoft Technical Support Web site, click Contact Us > Obtaining Upgrades. You are eligible to download or order any available version of a product for which you have an active Service Reliance Program (SRP), formerly known as a Tech Support Agreement (TSA). To verify or change your SRP status, contact your Sales Representative or Technical Support (http://techsupport.osisoft.com/) for assistance.

Appendix B. Revision History

Date          Author       Comments
4-Dec-2002    LACraven     Created using skeleton 1.11
28-Apr-2003   LACraven     Added multiple watchdog tags
1-Nov-2005    Janelle      Version 2.2.2.0, Rev A: fixed headers and footers, copyright, and section breaks
13-Apr-2006   Janelle      Version 2.2.3.0: updated manual to include latest version; fixed section breaks, removed tracking, fixed headers and footers, updated Table of Contents; removed bookmarks
12-Jun-2006   Amaksumov    Version 2.3.1.0: updated manual to include UniInt Failover section; changed the document structure and some content; updated hardware connection diagrams
15-Jun-2006   Mkelly       Version 2.3.1.0, Rev A: removed all first-person references; updated the TOC, fixed headers, footers, and section breaks
26-Jun-2006   Amaksumov    Version 2.3.2.0: added section Command Line Parameter Considerations
28-Jun-2006   Mkelly       Made minor grammatical changes; fixed headers
19-Jul-2006   Amaksumov    Version 2.3.2.0, Rev A: removed all references to the /UFO_Interval parameter for UniInt-based interface-level failover
28-Jul-2006   Amaksumov    Added tag attributes (excmax, excmin, excdev, excdevpercent, compressing) required for configuring the UniInt-based failover control tags
27-Oct-2006   Amaksumov    Version 2.3.3.0: changed the version number
28-Oct-2006   Mkelly       Version 2.3.3.0, Rev A: fixed page setup margins and tabs in headers; made tables fit within margins
14-Nov-2006   Mgrace       PLI# 9722OSI8: added section about buffering to the Cluster Failover section
29-Nov-2006   Mgrace       Fixed /wq=x descriptions
14-Feb-2007   Mgrace       Version 2.3.4.0
19-Feb-2007   Mkelly       Version 2.3.4.0, Rev B: added new ICU screenshots showing PI ICU 1.4.1.0 (PR1) layouts
17-Apr-2007   Mgrace       Updated the version to 2.3.5.0
20-Jun-2008   Lcraven      Changed name to OPC DA Interface Failover Manual; updated version to 2.3.8.0
19-Sep-2008   Lcraven      Updated the version to 2.3.9.0
3-Feb-2009    Lcraven      Removed UniInt failover documentation; updated ICU screenshots; updated version
20-Feb-2009   Mkelly       Version 2.3.10.0, Rev A: updated ICU screenshot and description for /WQ under Server-Level Failover
3-Jun-2009    MKelly       Version 2.3.11.0: changed all OPC Interface references to OPC DA Interface; changed version number for the 2.3.11.0 release; updated title and contact pages; fixed hyperlinks
14-Sep-2010   Sbranscomb   Version 2.3.11.0, Rev A: updated to Interface Skeleton Version 3.0.28
19-Jan-2011   EKuwana      Updated the version to 2.3.14.0; Rev A
20-Jul-2011   MKelly       Version 2.3.14.0-2.3.17.x: upped version number for rebuild with UniInt 4.5.2.0

