Anti-money laundering – four big factors that contribute to compliance failure
This is a guest blogpost by Partha Sen, CEO and co-founder, Fuzzy Logix
Money laundering was back at the top of the agenda recently when, on 26 June, the European Union’s Fourth Anti-Money Laundering Directive came into force, requiring EU member states (including the UK) to update their respective money laundering laws and transpose the new requirements into local law.
While it primarily affects those in the gambling sector – whose levels of laundering for criminal activity were considered “of concern” to the EU – the overall impact of the Fourth AML Directive is a requirement to “know your customers” better and to enhance your AML and financial crime framework with more effective prevention and detection controls.
With customer numbers, data volumes and transaction levels increasing exponentially year on year, “knowing your customers” better has become a complex and heavily mathematical task.
As long ago as November 2009, Forrester published a research report entitled ‘In-Database Analytics: The Heart of the Predictive Enterprise’. The report argued that progressive organisations “are adopting an emerging practice known as ‘in-database analytics’ which supports more pervasive embedding of predictive models in business processes and mission-critical applications.” And the reason for doing so? “In-database analytics can help enterprises cut costs, speed development, and tighten governance on advanced analytics initiatives”. Fast forward to today and you’d imagine that in-database analytics would have cleaned up in the enterprise. Well, while the market is definitely “hot”, it appears that many organisations have yet to see the need to make the shift.
And that’s despite the volumes of data having increased exponentially since Forrester wrote its report, meaning that the potential rewards for implementing in-database analytics are now even higher.
Given that we can deliver analysis speeds for our customers of between 10 and 100 times faster than if they were to move the data to a separate application outside the database, we have a ‘hard metric’ that is very compelling in helping us convince prospects of the value of in-database analytics. It’s what gives us confidence that the shift to in-database analytics as the standard for data analysis is a question of time rather than choice. Quite simply, the volumes of data now being created mean that the only way to process the data and find analytical value is by doing so within the database. But, as ever, real-world examples are the best way to illustrate a point, so let’s take an unusual one: money laundering.
Banks have a vested interest in staying compliant with anti-money laundering (AML) regulations, which require them to detect and report suspected money laundering. The regulations have been in place for several years, and it is likely that most large banks have systems and processes in place to track and catch money-laundering activity. Despite this, we still hear about cases where the authorities have fined reputable banks for failing to implement proper AML solutions. Very recently, in May 2017, Citigroup agreed to pay almost $100m and admitted to criminal violations as it settled an investigation into breaches of anti-money laundering rules involving money transfers between the US and Mexico, its two largest markets. Earlier this year, Deutsche Bank was fined $650m by British and US authorities for allowing wealthy clients to move $10 billion out of Russia. So why are current implementations and best practices not keeping up?
Four impediments to compliance
Let’s look at four big factors that contribute to compliance failure in the realm of anti-money laundering:
- The number of non-cash transactions around the world is exploding. Think about applications like Uber, Venmo, PayPal and Xoom, plus the various international money transfer services (almost every large retailer is now in the MoneyGram business). The number of non-cash transactions worldwide crossed 426 billion in 2015 and is growing at over 10% annually. Conclusion: there’s just a lot more hay to find the needle in.
- Legacy solutions were largely rules-based. A programme would run through the list of rules, and if any transaction matched a rule, an STR (suspicious transaction report) would be raised, meaning the transaction needed deeper analysis – often with a human in the loop. These rules may have been good at some point in time, but there are several issues with any rules-based system: (a) rules are static and require constant periodic refreshes – for instance, is a $4,000 deposit to a rural bank typical of a crop cycle, or an anomaly? (b) rules do not often transcend cultural, economic or geographical boundaries – e.g. a $400 hotel room in Washington D.C. may be normal, but in rural India it might warrant a second look; and (c) rules embed an inherent bias and don’t look at the data to determine what is ‘normal’ and therefore what stands out as ‘abnormal’. (A minimal sketch contrasting rules with a data-driven baseline follows this list.)
- The analysis required for AML is mathematically complex. Fraud is a rare event relative to the volume of data that needs sifting through, and catching money-laundering transactions involves looking deep into the history of an account. Each time the system flags a transaction for a deeper look, a human has to investigate further, so false positives mean a lot of resource and time invested by the bank. Couple this with the explosion in the number of transactions and the multiple channels through which non-cash transactions occur today, and you can see why systems that were put in place over a decade ago are in serious need of an upgrade.
- The sheer scale of the human capital cost of AML is mind-blowing. Many financial institutions I have spoken to have described the ‘human army’ required to manage and resolve the escalations of ‘suspect transactions’. And on top of the army of staff already employed, every false positive adds to the pile of transactions that must be reviewed. A cursory search online reveals that one institution currently has 95 openings in the AML space, while another is looking to optimise its ‘army’ through automation. Large institutions typically deal with a true-positive rate of 2-3% (the number of SARs filed out of the total number of cases looked at). A solution that can increase this true-positive rate by even a few percentage points, then, is almost a slam dunk: offer greater productivity without increasing the size of the army, or the same productivity with fewer staff, and they’re all ears!
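To make the contrast between static rules and letting the data speak concrete, here is a minimal Python sketch. It is purely illustrative: the threshold, the account names, the transaction histories and the z-score cut-off are all invented for the example, and a production AML engine would run far richer models over far more history inside the database itself.

```python
from statistics import mean, stdev

# Invented transaction histories (amounts in USD) for two hypothetical accounts.
histories = {
    # Rural account: small everyday amounts plus routine seasonal crop-sale deposits.
    "rural_crop_account": [3800, 150, 200, 3900, 180, 4100, 160],
    # Account normally used only for small everyday purchases.
    "rural_india_account": [6, 9, 7, 12, 8, 10, 11],
}

# The latest transaction to assess on each account.
new_transactions = {
    "rural_crop_account": 4000,    # another crop-cycle deposit
    "rural_india_account": 400,    # a $400 hotel room
}

RULE_THRESHOLD = 3000  # static rule: flag any single transaction above this amount


def rule_based_flag(amount):
    """Static rule: the same threshold everywhere, regardless of context."""
    return amount > RULE_THRESHOLD


def data_driven_flag(amount, history, z_cutoff=3.0):
    """Flag only if the amount sits far outside what is 'normal' for this
    account's own history, measured here as a simple z-score."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(amount - mu) / sigma > z_cutoff


for account, history in histories.items():
    amount = new_transactions[account]
    print(f"{account}: amount={amount}, "
          f"rule flags={rule_based_flag(amount)}, "
          f"data-driven flags={data_driven_flag(amount, history)}")
```

In this toy example the static rule raises a false positive on the routine crop-cycle deposit and misses the out-of-pattern $400 spend, while the per-account baseline handles both correctly; real systems replace the z-score with far richer statistical models, but the principle of letting the data define ‘normal’ is the same.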
With the money at stake for money launderers (according to the UN, $2 trillion is moved illegally each year), the efforts criminals take to avoid detection have become incredibly sophisticated. Organised crime is continually seeking ways to ensure that the process of money laundering is lost within the huge amounts of financial data now being processed on a daily, hourly and even minute-by-minute basis. Their hope is that, because so much data is being processed, it is impossible to spot where illegal money laundering activity is happening. And they’d be right, if you had to take the data out of the database for analysis.
Achieving a good degree of accuracy at a typical large bank means analysing billions of data points spanning multiple years of transactions in order to identify irregularities in trends and patterns. A traditional approach would require moving the data to a dedicated analytical engine, a process that could take hours, days or more depending on the volume of data, making it impossible to perform the analysis in a way that provides any real value to the organisation. With in-database analytics there is no need to move the data to a separate analytical engine, and the analysis can be performed on the entire dataset, ensuring the greatest possible coverage and accuracy.
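To make that difference concrete, here is a minimal sketch in Python, using an in-memory SQLite table purely as a stand-in for a bank’s transaction store (the schema and sample rows are invented, and the aggregation is deliberately trivial). The first function pulls every raw row into the application before summarising it; the second pushes the same summary into a single SQL statement so that only per-account results ever leave the database engine – the essence of the in-database approach.

```python
import sqlite3

# In-memory SQLite stands in for the bank's data warehouse; the schema and the
# handful of sample rows below are invented purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (account_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?)",
    [("A1", 120.0), ("A1", 95.0), ("A1", 4800.0), ("A2", 40.0), ("A2", 55.0)],
)


def account_stats_client_side(conn):
    """Traditional approach: ship every raw row out of the database and
    aggregate in the application - with billions of rows, the data movement
    dominates the runtime."""
    stats = {}
    for account_id, amount in conn.execute("SELECT account_id, amount FROM transactions"):
        total, count, biggest = stats.get(account_id, (0.0, 0, 0.0))
        stats[account_id] = (total + amount, count + 1, max(biggest, amount))
    return {a: (t / c, c, m) for a, (t, c, m) in stats.items()}


def account_stats_in_database(conn):
    """In-database approach: push the aggregation into the engine so that only
    the per-account summaries ever leave it."""
    query = """
        SELECT account_id, AVG(amount), COUNT(*), MAX(amount)
        FROM transactions
        GROUP BY account_id
    """
    return {a: (mean_amt, n, max_amt) for a, mean_amt, n, max_amt in conn.execute(query)}


print(account_stats_client_side(conn))
print(account_stats_in_database(conn))
```

Both functions return identical per-account summaries; the difference is where the computation happens and, at the scale of billions of transactions, how much data has to move before any analysis can even start.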
One-nil to the good guys
One of our largest customers is a leading retail bank in India. It was experiencing rapid growth in data volumes that challenged its existing AML processes. Because the data did not need to be moved for analysis, we were able to analyse billions of data points across more than three years of historical data to identify possible irregularities in trends and patterns, and to do so in under 15 minutes – faster than any other method. By not working to a pre-defined set of analytical rules and instead letting the data ‘speak for itself’, it is possible to uncover patterns which occur naturally in the data. As a result, the bank is seeing an improvement of over 40% in incremental identifications of suspicious activity and a 75% reduction in the incidence of ‘false positives’. In short: good guys 1, bad guys 0, because in-database analytics is having a very real impact on the bank’s ability to spot where money laundering is happening.
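To put those percentages into context for the ‘human army’ described earlier, here is a back-of-the-envelope sketch: the annual alert volume of 100,000 is a hypothetical figure chosen purely for illustration, while the ~2% true-positive rate and the 75% false-positive reduction are the numbers quoted above.

```python
# Hypothetical workload illustration: the alert volume is invented; the rates
# come from the figures quoted in the text.
alerts_per_year = 100_000        # hypothetical number of flagged cases reviewed annually
true_positive_rate = 0.02        # roughly 2-3% of reviewed cases end in a SAR

true_positives = alerts_per_year * true_positive_rate   # 2,000 genuine cases
false_positives = alerts_per_year - true_positives      # 98,000 reviews that go nowhere

false_positives_after = false_positives * (1 - 0.75)    # a 75% reduction in false positives
reviews_saved = false_positives - false_positives_after

print(f"Manual reviews saved per year: {reviews_saved:,.0f}")   # 73,500
```

Even under these illustrative assumptions, tens of thousands of manual reviews disappear each year, which is exactly the productivity argument made above.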
I’m pretty sure that when Forrester published its report on in-database analytics towards the end of the last decade, it didn’t envisage the fight against money laundering becoming a perfect case study for why in-database analytics is a no-brainer when handling large volumes of data. But in today’s world, with ever-increasing data volumes and an ever more urgent need to draw trends and insight from that data, in-database analytics has come of age. It’s time for every organisation to jump on board and make the shift; after all, if it can help defeat organised crime, imagine what it could do for the enterprise.