Abnormal Returns Data - the bane of the performance analyst

When it comes to measuring and analyzing performance in the asset management industry, the quality of both source and calculated data is of paramount importance.


Ensuring that data is accurate, consistent and in a format that can be easily interpreted and processed by a company’s systems is one of the biggest challenges faced by middle offices – and getting it wrong can be a significant drain on resources. What’s more, time spent cleaning up data issues or tracing the source of problems and erroneous results means less effort can be devoted to adding value elsewhere in the analysis process.

In a recent study carried out by the Economist Intelligence Unit, more than a third (36%) of executives in the asset management industry said that scrubbing data was one of their top three data-related difficulties, with non-compatible (29%) and incorrect data (20%) also featuring high on the list of common complaints.


Clean but tarnished
Many of these problems relate to the quality of data received from external sources over which the middle office may not be able to exercise a large degree of control. But what about issues that relate to “clean” data – figures which, although technically correct, appear to produce anomalous results in performance analysis?


One increasingly serious problem faced by analysts is when systems produce abnormal returns – that is, those that appear to mis-state the actual economic performance of the security or other asset being measured – not due to incorrect data entry or incompatibility between data formats, but as a result of integral failings in the performance-measurement system itself.

For many middle offices, the abnormal returns generated by certain systems are often viewed simply as a fact of life, with manual workarounds being routinely employed as and when anomalies occur.

These workarounds can range from simply instructing the system to ignore the anomaly, removing it from the results generated or changing the methodology or assumptions used.


Negative effects
But of course, such an ad-hoc approach can have serious consequences for the business in question.

Firstly, manually identifying and fixing such problems can be a serious drain on performance teams’ time and can divert resources away from activities which add more value. Given that the data volumes teams are expected to handle are likely to increase, this problem – if left unchecked – will only get worse.

A second issue relates to what happens when abnormal returns are not identified. In cases like these, there is a risk that erroneous figures can make it into end-user reports, damaging confidence in the whole process or perhaps leading to misguided investment decisions. As a final issue, it should be borne in mind that manually overriding data anomalies without a full and proper audit trail can create problems when it comes to governance and regulatory compliance.


Sources of the problem
So what are the most common reasons for abnormal returns to appear in performance reports? Well, the bad news is that they typically relate to occurrences which are not particularly unusual in most portfolios, such as large cashflows or sizeable transactions relative to the existing holding of a particular security.

When transactions such as these are carried out – especially if done so early or late on the day in question – they can have a disproportionate or misleading impact on the performance team’s calculations of portfolio returns.

The extent of this impact can depend on factors such as the time-weighting assumptions employed by the system. If those assumptions are too rigid and the measurement system too inflexible, abnormal returns are the likely result, and the burden on the performance team – who must identify and “fix” these anomalies as they arise – is far greater than it need be.
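To make the mechanism concrete, here is a minimal Python sketch of how a rigid time-weighting assumption can distort a Modified Dietz return. The scenario (a small holding receiving a very large subscription just before the close) and the function names are illustrative assumptions, not the vendor's actual implementation:

```python
def modified_dietz(bmv, emv, flows):
    """Modified Dietz return.

    flows: list of (amount, weight) pairs, where weight is the fraction
    of the period the flow was invested (1.0 = start of period,
    0.5 = mid-period, 0.0 = end of period).
    """
    net_flow = sum(amount for amount, _ in flows)
    weighted_flows = sum(amount * weight for amount, weight in flows)
    return (emv - bmv - net_flow) / (bmv + weighted_flows)

# Hypothetical scenario: a holding worth 100 gains 1% during the day,
# then a 10,000 subscription arrives just before the close,
# so the ending market value is 10,101.
bmv, emv, flow = 100.0, 10_101.0, 10_000.0

# A rigid mid-period assumption (weight 0.5) lets the huge flow swamp
# the denominator, collapsing the reported return to roughly 0.02%.
midday = modified_dietz(bmv, emv, [(flow, 0.5)])

# Weighting the flow at its actual time (end of day, weight 0.0)
# recovers the true economic return of 1%.
end_of_day = modified_dietz(bmv, emv, [(flow, 0.0)])

print(f"mid-day assumption:  {midday:.4%}")      # ~0.0196%
print(f"actual-time weight:  {end_of_day:.4%}")  # 1.0000%
```

The security genuinely returned 1%, yet the fixed mid-day assumption reports a figure fifty times smaller – exactly the kind of “technically clean but anomalous” result described above.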

Although significant progress has been made in the way investment performance is measured and analyzed over recent years, this is a serious flaw which still appears to be inherent in a large number of systems.

What is even more surprising is that there is a reasonably simple solution: getting the time-weighting approach right when dealing with unusually large purchases or sales of a security.

By configuring the performance measurement system to adjust the timing assumptions used when processing such transactions, many of the anomalies seen by middle offices in the instances described above can be automatically ironed out.

As a result, rather than generating what appear to be abnormal data, the system provides returns that are both realistic and internally consistent.

This approach allows the performance team to define how large an “unusually large” cashflow must be before the adjusted time-weighting is applied. At the same time, analysts retain the option of temporarily switching off this flexibility and reverting to the manual overrides used previously.
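The configurable rule described above could be sketched as follows. This is only an illustration of the idea – the function name, the default mid-period weight and the 10% threshold are assumptions, and a real system would expose these as user-defined parameters:

```python
def choose_flow_weight(flow, bmv, actual_weight,
                       default_weight=0.5, large_flow_ratio=0.10):
    """Pick the time-weighting applied to a cashflow.

    Hypothetical rule: if the flow exceeds large_flow_ratio of the
    beginning market value (bmv), weight it at its actual timing
    (actual_weight); otherwise keep the system's default mid-period
    assumption (default_weight).
    """
    if bmv and abs(flow) / abs(bmv) > large_flow_ratio:
        return actual_weight   # large flow: use its real timing
    return default_weight      # normal flow: keep the default

# A 10,000 inflow against a 100 holding trips the threshold and is
# weighted at its actual (end-of-day) timing; a small flow keeps the
# default mid-period weight.
print(choose_flow_weight(10_000.0, 100.0, actual_weight=0.0))  # 0.0
print(choose_flow_weight(5.0, 100.0, actual_weight=0.0))       # 0.5
```

Making the threshold explicit and logged is also what preserves the audit trail that ad-hoc manual overrides lack.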

Ultimately, though, this smarter and more flexible approach will not only lead to more accurate data – it will also free up valuable performance-team resources to focus on more productive activities.


Takeaways

  • Ensuring data is “clean” is not the only challenge facing performance measurement teams.
  • Accurate data can also create erroneous, abnormal returns, for example when dealing with unusually large transactions.
  • Putting these issues right manually can be a significant drain on middle office resources.
  • It is possible, however, to configure performance measurement systems to deal with these anomalies automatically.



Neil Smyth

Marketing & Technology Director, StatPro Group
