Managing Reference Data Risk: A Best Practice Approach
WHITE PAPER

Managing reference data risk: A best practice approach

Philip Filleul, Solutions Manager, Asset Management
Fred Cohen, Group Vice President, Asset Management

Executive Summary

In recent years, the twin pressures of achieving regulatory compliance and straight-through processing (STP) have highlighted substantial redundancy and duplication of effort in the area of reference data management. Many buy-side and sell-side firms are uncomfortably aware of both the time and cost they devote to purchasing, cleansing, and distributing reference data, and of the risks that arise when these tasks are not performed effectively, among them failed trades and lost revenue opportunities.

Firms are now questioning whether there is sufficient value and adequate technology support to consolidate all their market data management and counterparty systems on a single platform. As a best practice, market data and counterparty data should be acquired as a utility and mapped to reference data through cross-referencing. But a robust reference data management platform must underpin such efforts if they are to succeed.

Recognizing that there is little competitive differentiation to be derived from managing reference data, many firms are turning to third-party specialist organizations to reduce unnecessary costs and operational risks while improving data quality. Best-practice reference data management may involve leveraging such specialists, but only once the purchase, cleansing, and distribution risk points in the process have been considered. Firms first need to understand the gap between their current reference data management processes and newer, best-practice approaches.

Business impact of poor data quality: Increased risk and unnecessary cost

In the rush to improve efficiency with STP while adhering to regulatory compliance, have firms failed to pay sufficient attention to the risks associated with poor data quality? Current market trends suggest that this is the case. According to research from TowerGroup, a leading analyst firm, in spite of the millions of dollars invested by financial services companies in middleware upgrades to integrate key trading systems, one in ten trades still fails on the first settlement attempt. When that occurs, the chief culprits are inaccurate or inconsistent reference data and poor data management processes; TowerGroup analysts estimate that these problems account for some 60% of failed trades.

Duplicate reference data purchases result in unnecessary cost

While market data provides real-time pricing information that is constantly changing, reference data is the static information used in the administration of a securities transaction. Reference data may be static, but it covers a very broad set of data, encompassing historical pricing information (such as an end-of-day price on a given date), security identifier codes, the exchange on which a security trades, names and addresses of counterparties, and details of corporate actions such as stock splits or proxy votes. All large financial firms access this data from a variety of sources, and disparate, siloed data systems and operations are the norm.
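To make the breadth of that "static" data concrete, the sketch below models a minimal securities reference record in Python. The field names and sample values are illustrative assumptions only, not a schema drawn from any particular platform.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SecurityReferenceRecord:
    """Minimal sketch of one security's static reference data (illustrative only)."""
    isin: str                        # primary security identifier
    identifiers: dict[str, str]      # cross-reference of other identifier schemes
    exchange: str                    # listing exchange, as an ISO 10383 MIC code
    issuer_name: str
    corporate_actions: list[str] = field(default_factory=list)   # e.g. splits, proxy votes
    eod_prices: dict[date, float] = field(default_factory=dict)  # historical end-of-day prices

# One security, described consistently across identifier schemes.
record = SecurityReferenceRecord(
    isin="US0378331005",
    identifiers={"CUSIP": "037833100", "SEDOL": "2046251", "TICKER": "AAPL"},
    exchange="XNAS",
    issuer_name="Apple Inc.",
)
record.eod_prices[date(2008, 6, 30)] = 167.44  # hypothetical end-of-day price
record.corporate_actions.append("2005-02-28: 2-for-1 stock split")
```

In practice, each of these fields may arrive from a different vendor or internal system, which is exactly how the silo problem described next arises.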
To add to the complexity, reference data is typically sourced from a range of internal and external providers and fed into data silos scattered around the organization for consumption by employees in different departments. At the same time, mergers and acquisitions in the financial sector are leading some companies to hold even more sets of reference data in silos around their business. In other words, there is no single version of the truth within the organization, and reference data purchases are often duplicated, resulting in unnecessary cost.

Inconsistent reference data increases operational risk

When inconsistencies or inaccuracies in reference data arise, exceptions occur in the trade lifecycle, leading to increased operational risk, lost revenue opportunities, and financial liabilities. This is even more true in periods of market volatility, when firms feel the need for expensive and time-consuming manual trade duplication and reconciliation processes. Given the vital role reference data plays in processes such as portfolio valuation and net asset value (NAV) calculations, it is clear that, when sourced and managed in this way, it becomes a major factor in increasing operational risk.

Real-world example: Business impact of inconsistent reference data

Take, for example, the experience of one sell-side bank that was recently forced to close out a hedge fund. While looking to sell the collateral it had taken against positions held in that fund, the bank discovered that its front-office traders were working from one set of pricing data while its risk management staff had a completely different version. These two conflicting sets of reference data caused considerable confusion and unnecessary cost, and ultimately forced the fund's closure due to business losses.

Consolidating market data management and counterparty systems

The reference data management problem is shared by the buy-side and the sell-side alike: each has to worry about securities reference data in order to settle trades reliably and to provide fair valuations. Sell-side firms are now beginning to consider market data and counterparty data as part of the larger reference data problem. Firms that had invested in market data management and counterparty systems separate from their securities reference data management platforms are now questioning whether there is sufficient value and adequate technology support to consolidate all these systems on a single platform. Let us examine the issues involved in integrating market data management and counterparty systems.

1. The market data problem

Trading firms have long been concerned with sourcing accurate market data on a real-time basis with absolute minimum latency in order to feed algorithmic trading engines. Latencies are measured in microseconds and volumes in millions of messages per second, which is very different from the volume and speed characteristics of traditional reference data management. Only the largest Tier 1 firms can afford the luxury of storing their own tick histories; for the majority, cost-effective industry utility services such as Tickdata and Reuters DataScope Tick History provide clean tick history on demand, which can be used for proof of best execution and for back-testing algorithms. There are providers of reference data management platforms who can integrate market data into a business process when it is supplied as a feed to distribute, but who cannot manage it in a real-time scenario. Firms should closely examine whether the additional costs of these more complex platforms, together with the cost of the actual tick data storage, provide sufficient benefits when compared with less expensive securities reference data management platforms and tick data history utilities.

2. The counterparty data problem

Many firms have invested heavily in counterparty data management. However, through a spate of recent acquisitions, organizational, geographic, and functional silos have developed, resulting in multiple databases in different formats. Fragmentation can go as far as different counterparty databases existing even within a single group, depending on the instruments concerned. Counterparty data is often viewed by firms as proprietary and relatively static; in fact, it is neither. For one thing, it is either right or wrong, and the right data can be reliably sourced as a utility, for example from Avox or Dun & Bradstreet. For another, around 20% of it changes from year to year as a result of M&A and corporate restructuring activities. There is undoubtedly a benefit in holding a firm's counterparty data in one integrated place, but now that industry utilities exist from which clean data can be reliably sourced, the data itself is no longer a proprietary advantage; the advantage stems from the uniform use of that data source throughout the firm.

In both cases, our view is that for most firms, market data and counterparty data should be acquired as a utility and mapped to reference data through cross-referencing. But a robust reference data management platform must underpin such efforts if they are to succeed.
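As a rough illustration of the cross-referencing just described, the sketch below maps utility-sourced identifiers onto a single internal record key. The table contents and names (`XREF`, `to_internal_id`) are hypothetical, not any vendor's actual schema.

```python
# Sketch of identifier cross-referencing: externally sourced records are keyed
# back to the firm's internal reference data. All names and codes here are
# illustrative assumptions.

# (scheme, external code) -> internal golden-record key
XREF = {
    ("CUSIP", "037833100"): "SEC-000123",
    ("SEDOL", "2046251"):   "SEC-000123",
    ("RIC",   "AAPL.O"):    "SEC-000123",  # a market data feed identifier
}

def to_internal_id(scheme: str, code: str) -> str:
    """Resolve a utility-sourced identifier to the internal record key."""
    try:
        return XREF[(scheme, code)]
    except KeyError:
        # An unmapped identifier should surface as a data exception for
        # remediation rather than fail silently downstream.
        raise LookupError(f"no cross-reference for {scheme}:{code}")

# A tick-history record keyed by RIC and a counterparty confirmation keyed by
# CUSIP both resolve to the same internal security.
assert to_internal_id("RIC", "AAPL.O") == to_internal_id("CUSIP", "037833100")
```

The same pattern applies to counterparty records, with utility-assigned entity identifiers taking the place of security codes.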
Managing multiple risk points for efficient reference data management

The above considerations argue strongly for the concept of a 'Golden Copy' of reference data to serve everyone in the company. In risk terms, the benefits of a more centralized strategy for the management of reference data include:

• Reduced trade failures
• Reduced costs related to manual rekeying of data and