WHITE PAPER

Managing reference data risk: A best practice approach

Philip Filleul, Solutions Manager - Asset Mgmt.

Fred Cohen, Group Vice President - Asset Mgmt.

Executive Summary

In recent years, the twin pressures of achieving regulatory compliance and straight-through processing (STP) have highlighted substantial redundancy and duplication of effort in reference data management. Many buy-side and sell-side firms are uncomfortably aware of the time and cost they devote to purchasing, cleansing, and distributing reference data, and of the risks that arise when these tasks are not performed effectively, among them failed trades and lost revenue opportunities.

Firms are now questioning whether there is sufficient value and adequate technology support to consolidate all their market data management and counterparty systems on a single platform. As a best practice, market data and counterparty data should be acquired as a utility and mapped to reference data through cross-referencing. But a robust reference data management platform must underpin such efforts if they are to succeed.

Recognizing that there is little competitive differentiation to be derived from managing reference data, many firms are turning to third-party specialist organizations to reduce unnecessary costs and operational risks while improving data quality.

Best-practice reference data management may involve leveraging third-party specialist organizations, but only when the purchase, cleansing, and distribution risk points in the process have been considered. Firms first need to understand the gap between their current reference data management processes and newer, best-practice approaches.


Business impact of poor data quality: Increased risk and unnecessary cost

In the rush to improve efficiency with straight-through processing (STP) while adhering to regulatory compliance, have firms failed to pay sufficient attention to the risks associated with poor data quality? Based on current market trends, this certainly seems to be the case.

According to market research from TowerGroup (a leading analyst firm), in spite of the millions of dollars companies have invested in middleware upgrades to integrate key trading systems, one in ten trades still fails on the first settlement attempt. When that occurs, the chief culprits are inaccurate or inconsistent reference data and poor data management processes, which TowerGroup analysts say account for some 60% of failed trades.

Duplicate reference data purchases result in unnecessary cost

While market data provides real-time pricing information that is constantly changing, reference data is the static information used in the administration of a securities transaction. Reference data may be static, but it covers a very broad set of data, encompassing historical pricing information (such as an end-of-day price on a given date), identifier codes, the exchange on which a security trades, names and addresses of counterparties, and details of corporate actions, such as stock splits or proxy votes.
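To make the breadth of this static data concrete, the sketch below shows how a single security reference record might be represented. It is purely illustrative: the field names are our own assumptions and do not reflect any particular vendor's data model or standard.

from dataclasses import dataclass, field
from datetime import date
from typing import Dict, List

# Illustrative sketch of a security reference record; the fields mirror the
# categories described above (identifiers, exchange, counterparty details,
# historical prices, corporate actions), but the names are assumptions.
@dataclass
class SecurityReferenceRecord:
    isin: str                                  # identifier code (e.g. ISIN)
    exchange: str                              # exchange on which the security trades
    issuer_name: str                           # name of the issuing counterparty
    issuer_address: str                        # address of the issuing counterparty
    closing_prices: Dict[date, float] = field(default_factory=dict)  # end-of-day prices by date
    corporate_actions: List[str] = field(default_factory=list)       # e.g. "2-for-1 split, 2009-06-01"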

All large financial firms access data from a variety of sources, and disparate, siloed data systems and operations are the norm. To add to the complexity, reference data is typically sourced from a range of internal and external providers and fed into data silos scattered around the organization for consumption by employees in different departments.

At the same time, mergers and acquisitions in the financial sector are leading some companies to hold even more sets of reference data in silos around their business. In other words, there is no single version of the truth present in the organization and reference data purchases are often duplicated, resulting in unnecessary cost.

Inconsistent reference data increases operational risk

When inconsistencies or inaccuracies in reference data arise, exceptions in the trade lifecycle occur, leading to increased operational risk, lost revenue opportunities, and financial liabilities. That is even more true in a period of market volatility, when firms feel the need for expensive and time-consuming manual trade duplication and reconciliation processes.

Given its vital role in processes such as portfolio valuation and NAV (net asset value) calculations, it is clear that when sourced and managed in this way, reference data is a major factor in increasing operational risk.

Real-world example: Business impact of inconsistent reference data

Take, for example, the experience of one sell-side bank that was recently forced to close out a hedge fund.

While looking to sell the collateral it had taken out against positions held in that fund, it discovered that while front-office traders were working from one set of pricing data, its risk management staff had a completely different version.

These two different sets of reference data caused considerable confusion and unnecessary cost, ultimately forcing the fund's closure due to business losses.


Consolidating market data management and counterparty systems

The reference data management problem is shared by the buy-side and the sell-side alike. Each has to worry about securities reference data in order to settle trades reliably and provide fair valuations. Sell-side firms are now beginning to consider market data and counterparty data as part of the larger reference data problem.

Firms that had invested in market data management and counterparty systems separate from their securities reference data management platforms are now questioning whether there is sufficient value and adequate technology support to consolidate all these systems on a single platform.

Let us examine the issues with integration of market data management and counterparty systems:

1. The market data problem

Trading firms have long been concerned with sourcing accurate market data on a real-time basis with absolute minimum latency in order to feed algorithmic trading engines. Latencies are measured in microseconds and volumes in millions of messages per second, which is very different from the volume and speed characteristics of traditional reference data management. Only the largest Tier 1 firms can afford the luxury of storing their own tick histories; for the majority, there are cost-effective industry utility services, such as Tickdata and Reuters DataScope Tick History, which provide clean tick history on demand. These can be used for proof of best execution and algorithmic back-testing.

Some providers of reference data management platforms can integrate market data into a business process when it is provided as a feed to distribute, but they cannot manage it in a real-time scenario. Firms should closely examine whether the additional costs of the more complex platforms, together with the cost of actual tick data storage, provide sufficient benefits compared with less expensive securities reference data management platforms and tick data history utilities.

2. The counterparty data problem

Many firms have invested heavily in counterparty data management. However, through a spate of recent acquisitions, organizational, geographic, and functional silos have developed, resulting in multiple databases in different formats. Fragmentation can go as far as having different counterparty databases even within a group, depending on the instruments concerned.

Counterparty data is often viewed by firms as 'proprietary' and relatively static. In fact, it is neither. For one thing, it is either right or wrong and the 'right' data can be reliably sourced as a utility, for example, from Avox or Dun & Bradstreet. For another, around 20% of it changes from year to year as a result of M&A and corporate restructuring activities.

For a firm, there is undoubtedly a benefit from integrating counterparty data in one place. Now that industry utilities exist where clean data can be reliably sourced, it is no longer a proprietary advantage. The advantage would stem from the uniform usage of that data source throughout the firm.

In both cases, our view is that for most firms, market data and counterparty data should be acquired as a utility and mapped to reference data through cross-referencing. But a robust reference data management platform must underpin such efforts if they are to succeed.
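As a simple illustration of that cross-referencing step, the sketch below maps counterparty records sourced from an external utility onto internal records via a table of identifier mappings. The identifiers, field names, and data are invented for illustration and do not describe any specific platform or utility.

# Hypothetical cross-reference: internal counterparty IDs mapped to the IDs used
# by an external data utility, so utility-sourced attributes can enrich internal records.
utility_counterparties = {
    "UTIL-12345": {"legal_name": "Example Bank plc", "country": "GB"},
}

xref = {"CPTY-0007": "UTIL-12345"}   # internal ID -> utility ID

internal_records = {"CPTY-0007": {"trading_limit": 5_000_000}}

def enrich(internal_id):
    """Attach utility-sourced attributes to the internal record, if a mapping exists."""
    utility_id = xref.get(internal_id)
    if utility_id in utility_counterparties:
        internal_records[internal_id].update(utility_counterparties[utility_id])
    return internal_records[internal_id]

print(enrich("CPTY-0007"))   # {'trading_limit': 5000000, 'legal_name': 'Example Bank plc', 'country': 'GB'}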


Managing multiple risk points for efficient reference data management

The above considerations argue strongly for the concept of a 'Golden Copy' of reference data to serve everyone in the company. In risk terms, the benefits of a more centralized strategy for the management of reference data include:
• Reduced trade failures
• Reduced costs related to manual rekeying of data and duplication of effort
• Reduced operational risk
• Reduced number of missed trading opportunities

Best-practice reference data management may involve outsourcing, but only when the following risk points in the process have been considered:

Risk-Point 1: Purchase

The analyst firm Burton-Taylor International Consulting contends that organizations worldwide spent just over $23 billion last year on financial information and analysis provided by a wide range of specialist providers, including Thomson Reuters (TRI), Bloomberg, Interactive Data Corporation (IDC), FactSet, and SIX Telekurs.

In the case of reference data, it is common to buy multiple sets from multiple vendors, according to the needs and preferences of individual employees and teams within the organization. There is often good reason for this. Each vendor has its strengths and weaknesses, so for a particular instrument category (for example, fixed income), there may be three or four suitable vendors. Internal politics may also come into play, with disputes over the 'best' (that is, most trustworthy) provider from which to buy reference data for a particular instrument category.

For most organizations, there is no single data vendor that has the full range of data coverage or depth of attribute types needed to satisfy all requirements, nor is there a clear policy for combining that data into a single, meaningful view. As a result, organizations spend money on duplicate, non-optimized data. They are often not even sure of what they are buying, or where the data is being used. In some cases, the reference data being purchased may not be used at all, such as when a particular department decides to take data feeds from another source. Contract managers responsible for purchasing the data, however, may be reluctant to turn off feeds they believe to be redundant because they are not sure whether other departments in the organization are using them.

Best-practice reference data management circumvents these purchase-point risks by using advanced tools to track and analyze which data sets are being purchased and where they are being used, tracing the data path from contributor to consumer and then monitoring by user and data element. Smart companies also turn to trusted independent partners to help them determine which data sources are right for the organization and its employees.
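A minimal sketch of that tracking idea follows. The feed and system names are invented; the point is simply that an inventory relating each purchased feed to the systems that actually consume it makes duplicate or unused purchases visible.

# Hypothetical inventory of purchased vendor feeds and the systems consuming them.
feed_consumers = {
    "Vendor A fixed-income feed": {"settlement", "risk"},
    "Vendor B fixed-income feed": {"risk"},    # overlaps Vendor A's coverage
    "Vendor C corporate-actions feed": set(),  # purchased but not consumed anywhere
}

unused = [feed for feed, consumers in feed_consumers.items() if not consumers]
print("Feeds with no known consumers:", unused)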

Risk-Point 2: Cleansing

Reference data is generally non-optimized at the time of purchase and hence needs to be cleansed in order to identify inconsistencies and faults. If reference data is held in multiple silos rather than centrally, it is likely to be cleansed multiple times.

There is also the increasing impact of corporate actions on reference data to consider. The volume of corporate actions has ramped up significantly over the past year, yet the normalization of reference data to reflect the changes they invoke still has to be carried out across multiple silos.

Some corporate actions are relatively easy to handle, such as splits and dividends. Others, like complex rights issues, require highly labor-intensive intervention from a specialist who understands what is being offered by the corporate action in question. In addition, corporate actions tend to be unexpected and brought into effect quickly, giving those responsible for reference data cleansing little time to respond.
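A 2-for-1 split, for instance, can be applied mechanically: holdings are scaled up and historical prices scaled down by the split ratio. The sketch below is deliberately simplified and ignores rounding conventions, fractional entitlements, and vendor-specific adjustment factors.

def apply_split(position_qty, price_history, new_shares=2, old_shares=1):
    """Apply an N-for-M stock split: scale the holding up and historical prices down."""
    ratio = new_shares / old_shares
    adjusted_qty = position_qty * ratio
    adjusted_prices = {d: p / ratio for d, p in price_history.items()}
    return adjusted_qty, adjusted_prices

qty, prices = apply_split(100, {"2010-01-04": 50.0})
print(qty, prices)   # 200.0 {'2010-01-04': 25.0}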

As a best practice for reference data management, use automated tools that cleanse data by monitoring incoming data and checking for relationships and expected patterns. When exceptions occur, manual intervention may be required, but smart companies use skilled staff in low-cost offshore locations to do this.
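A minimal sketch of such a rule-based check appears below. The patterns, reference lists, and thresholds are illustrative assumptions, not those of any specific cleansing tool; records that fail any check are routed to manual review.

import re

def validate_record(record, previous_close=None):
    """Return a list of exceptions for one incoming record; an empty list means it passes."""
    exceptions = []
    if not re.fullmatch(r"[A-Z]{2}[A-Z0-9]{9}\d", record.get("isin", "")):
        exceptions.append("ISIN does not match the expected pattern")
    if record.get("exchange") not in {"XLON", "XNYS", "XNAS"}:   # illustrative whitelist
        exceptions.append("Unrecognized exchange code")
    close = record.get("close")
    if previous_close and close and abs(close / previous_close - 1) > 0.5:
        exceptions.append("Closing price moved more than 50% day on day")
    return exceptions

print(validate_record({"isin": "GB0002634946", "exchange": "XLON", "close": 4.10},
                      previous_close=4.05))   # [] -> record passes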


Risk-Point 3: Distribution

Once reference data has been bought and cleansed, it needs to be fed to the individual systems that consume it. That requires an in-depth understanding of what data sets and fields each consuming system requires, when the data is needed, and in what format it is expected. Those requirements will vary hugely according to the function the system performs, for example, NAV reporting, compliance reporting, collateral pricing, and so on.

As a best practice for reference data management, use ETL (extraction, transformation, and loading) tools to subset and reformat the data contained in the 'Golden Copy' and send it to each consuming system in a form it can understand and use. When a request for a new feed is submitted, skilled personnel are on hand to decide which fields from which data sets are required to build it.
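The sketch below illustrates that subset-and-reformat step with invented field names and consumer profiles; a production ETL tool would express the same mappings declaratively and at far greater scale.

import csv, io, json

# Golden Copy records (illustrative fields only).
golden_copy = [
    {"isin": "GB0002634946", "exchange": "XLON", "issuer": "Example plc", "close": 4.10},
]

# Each consuming system declares the fields it needs and the format it expects.
consumer_profiles = {
    "nav_reporting": {"fields": ["isin", "close"], "format": "csv"},
    "compliance":    {"fields": ["isin", "issuer", "exchange"], "format": "json"},
}

def build_feed(system):
    """Subset and reformat the Golden Copy for one consuming system."""
    profile = consumer_profiles[system]
    subset = [{f: rec[f] for f in profile["fields"]} for rec in golden_copy]
    if profile["format"] == "json":
        return json.dumps(subset)
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=profile["fields"])
    writer.writeheader()
    writer.writerows(subset)
    return out.getvalue()

print(build_feed("nav_reporting"))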


Leveraging third-party specialist organizations to manage commoditized reference data

Many firms have now realized that there is little competitive advantage to be gained from managing publicly available, highly commoditized reference data in-house. Increasingly, firms are turning to third-party specialist organizations, not only to manage reference data on their behalf, but also to re-architect data systems in such a way that they are outsourcing-ready.

Using a combination of best-of-breed tools and skilled resources, the third-party specialist will normalize, cleanse, validate, and aggregate multi-source content to achieve a single, standardized 'Golden Copy' of reference data, which is fed back to the client as a managed service. We also need to consider the efficiency gains associated with higher levels of data quality: reduced trade failures, reduced manual interventions, and reduced data rekeying all lower operational risk.
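One common way such aggregation is done is field-level vendor precedence: for each attribute, take the value from the most trusted source that supplies it. The sketch below is a simplified illustration under that assumption; the vendors, trust order, and data are invented and do not represent any provider's actual methodology.

# Sketch of multi-source aggregation into a single golden record using
# field-level vendor precedence (most trusted source first).
vendor_records = {
    "vendor_a": {"isin": "GB0002634946", "exchange": "XLON", "issuer": "Example plc"},
    "vendor_b": {"isin": "GB0002634946", "exchange": "XLON", "issuer": None, "close": 4.10},
}
precedence = ["vendor_a", "vendor_b"]   # assumed trust order

def build_golden_record(records, precedence):
    """For each field, keep the first non-missing value in precedence order."""
    golden = {}
    for vendor in precedence:
        for field_name, value in records.get(vendor, {}).items():
            if field_name not in golden and value is not None:
                golden[field_name] = value
    return golden

print(build_golden_record(vendor_records, precedence))
# -> {'isin': 'GB0002634946', 'exchange': 'XLON', 'issuer': 'Example plc', 'close': 4.1}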

Our view is that there has never been a better time for financial firms to leverage third-party specialist organizations to manage commoditized reference data. A number of firms have now emerged that offer the competence and experience to handle reference data on your behalf. It is time for firms to consider whether they should continue to employ in-house staff to manage reference data, in some of the world's most high-cost financial centers, or entrust it to a provider that offers a more economically appealing alternative.

The risks associated with reference data are now a pressing concern and require immediate action. Best-practice reference data management is critical to current performance and a prerequisite for achieving further growth and efficiency.

About the Authors

Philip Filleul is Solutions Manager for Patni's Reference Data Solution. He has 24 years' experience in financial systems, having held senior positions with major suppliers to large banks, including IBM and Sun Microsystems, and has focused in recent years on risk, compliance, and reference data.

Fred Cohen is Group Vice President and Global Head of Patni's Capital Markets and Investment Banking practice. With over 25 years in the financial markets, Fred has worked for a succession of high-profile employers and led the successful implementation of very large-scale financial projects.

About Patni

Patni Computer Systems Ltd. is one of the leading global providers of Information Technology services and business solutions. Around 15,000 professionals service clients across diverse industries from 28 international offices across the Americas, Europe, and Asia-Pacific, and 23 Global Delivery Centers in strategic locations across the world. Patni has serviced more than 400 FORTUNE 1000 companies over three decades.

The vision is to achieve global IT services leadership in providing value-added, high-quality IT solutions to clients in selected horizontal and vertical segments, by combining technology skills, domain expertise, process focus, and a commitment to long-term client relationships.

Committed to quality, Patni adds value to its clients' businesses through well-established and structured methodologies, tools, and techniques. Patni is an ISO 9001:2000 certified and SEI-CMMI Level 5 (V 1.2) organization, assessed enterprise-wide at P-CMM Level 3. In keeping with its focus on continuous process improvement, Patni adopts Six Sigma practices as an integral part of its quality and process frameworks.

Patni's reference data management practice has expertise in optimizing data architectures, expertise in implementing market-leading reference data management platforms, and a business process outsourcing group currently delivering manual reference data cleansing services to a number of major organizations.

REGIONAL HEADQUARTERS

AMERICAS
United States
Patni Americas, Inc.
One Broadway, Cambridge, MA 02142
Tel: +1 617-914-8000 Fax: +1 617-914-8200

EMEA
United Kingdom
Patni Computer Systems (UK) Ltd.
The Patni Building, 264-270 Bath Road, Heathrow UB3 5JJ
Tel: +44 20 8283 2300 Fax: +44 20 8759 9501

SAARC
India
Patni Computer Systems Ltd.
Ackruti, MIDC Cross Road No 21, Andheri (E), Mumbai 400 093
Tel: +91 22 6693 0500 Fax: +91 22 6693 0211

APAC
Singapore
Patni (Singapore) Pte Ltd
61 Robinson Road, #16-02 Robinson Centre, Singapore 068893
Tel: +65-6602-6600 Fax: +65-6602-6610

Contact: [email protected] | www.patni.com
25+ years in IT Services | 15,000+ employees | SEI-CMMI-Dev Level 5 (V 1.2) | ISO 9001:2008

© 2010 Patni. All rights reserved. Brand names and trademarks belong to their respective owners.