14 February 2018 - Post by: Ian Rodgers
The Head of the Financial Crime Department at the UK Financial Conduct Authority (the FCA), Rob Gruppetta, gave a speech on “Using artificial intelligence to keep criminal funds out of the financial system” in December 2017. In it, he explored how artificial intelligence (AI) could potentially be used to prevent financial crime, and for anti-money laundering (AML) purposes in particular. Although there were sure to be challenges, he concluded, AI had the “capability to greatly amplify the effectiveness of the machine’s human counterparts” in this area. This article highlights four potential risks that a firm’s legal advisers may want to consider before AI is incorporated into their institution’s financial crime or AML processes.
Financial crime and AML innovation
It is estimated that British banks spend GBP 5 billion each year fighting financial crime. Much of this is spent on trying to prevent money laundering (not always, history has taught us, successfully: the UK National Crime Agency estimates that hundreds of billions of pounds of criminal money are still laundered through UK banks each year). In such circumstances, it comes as no surprise that firms and authorities alike are interested in market innovation that can strengthen AML processes and reduce the cost of compliance.
Take transaction monitoring, for example, which was the main focus of Mr Gruppetta’s speech. In this context, AI may help firms reduce the number of “false positives”. These long-standing enemies of AML processes are alerts which require investigation but result in no substantive action. AI machine learning techniques, even at their current stage of development, may be capable of reducing these costly detours into irrelevance by around 20-30 per cent. It appears that at least one financial institution has already implemented such a system. In light of this, we may expect to see other firms putting in place similar AI systems.
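To make the idea concrete, the sketch below shows a toy alert-triage scorer of the kind a transaction-monitoring system might use to suppress false positives. It is purely illustrative: the feature names, weights and threshold are all invented for this example and do not reflect any firm’s actual system.

```python
# Illustrative sketch only: a toy alert-triage scorer. The features,
# weights and threshold are invented, not taken from any real system.

def score_alert(alert):
    """Return a risk score for a transaction-monitoring alert."""
    weights = {
        "amount_zscore": 0.4,       # how unusual the amount is for this customer
        "new_counterparty": 0.3,    # first payment to this counterparty
        "high_risk_corridor": 0.3,  # payment route flagged as higher risk
    }
    return sum(weights[k] * alert.get(k, 0.0) for k in weights)

def triage(alerts, threshold=0.5):
    """Split alerts into those escalated to an analyst and those auto-closed."""
    escalated = [a for a in alerts if score_alert(a) >= threshold]
    closed = [a for a in alerts if score_alert(a) < threshold]
    return escalated, closed

alerts = [
    {"amount_zscore": 0.9, "new_counterparty": 1.0, "high_risk_corridor": 1.0},
    {"amount_zscore": 0.1, "new_counterparty": 0.0, "high_risk_corridor": 0.0},
    {"amount_zscore": 0.2, "new_counterparty": 1.0, "high_risk_corridor": 0.0},
]
escalated, closed = triage(alerts)
```

In a real system the weights would be learned from labelled historical alerts rather than hand-set, which is precisely where the interpretability questions discussed below arise.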
Interpretability

AI decisions may be taken in a way which cannot be easily understood by those who use the software. Mr Gruppetta described this as a potential lack of “interpretability” of AI.
In a financial crime and AML context, such a potential lack of interpretability may pose a problem. Firms typically need to be able to explain why and how a particular decision was made (for example, what was the basis for a suspicion of money laundering, or lack thereof). This is important for a firm’s internal systems and controls and could also be necessary in the context of regulatory enforcement action.
There are, it should be noted, circumstances in which AI decisions or actions can in fact be easily understood by those who use the software. However, this interpretability may make a system less effective if it necessitates a reduction in the complexity of the technology. The FCA acknowledged this trade-off when Mr Gruppetta said that “a firm may need to carefully explore the trade-off between interpretability and performance when choosing what [AI] systems to use”.
The FCA has not, however, provided any guidance as to where the line ought to be drawn. Instead, it has been left to firms themselves to consider on a case-by-case basis. This presents a challenge of fundamental importance, and of real regulatory risk, in a financial crime and AML context: get it wrong, and the new system may end up causing more problems than it solves.
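One practical way a firm might preserve interpretability is to have the system record the specific reasons behind each decision, so that they can later be explained to an analyst or a regulator. The sketch below illustrates this; the rules, thresholds and country codes are invented for the example.

```python
# Sketch of an "explainable" alert decision: the system records the
# reasons for flagging alongside the decision itself. The rules,
# thresholds and country codes here are invented for illustration.

HIGH_RISK_COUNTRIES = {"XX", "YY"}  # placeholder codes, not a real list

def explainable_decision(txn):
    """Return a decision together with the human-readable reasons for it."""
    reasons = []
    if txn["amount"] > 10_000:
        reasons.append("amount exceeds 10,000 threshold")
    if txn["country"] in HIGH_RISK_COUNTRIES:
        reasons.append(f"destination {txn['country']} is on the high-risk list")
    return {"flagged": bool(reasons), "reasons": reasons}

decision = explainable_decision({"amount": 25_000, "country": "XX"})
```

Rule-based checks of this kind are easy to explain but, as the speech notes, may underperform more complex learned models, which is exactly the trade-off the FCA describes.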
Bias

AI systems have the potential to reinforce pre-existing human biases. A machine has no predetermined concepts about right and wrong, only those which are programmed into it. A system that can learn for itself may act in a way unforeseen by its creators, and contrary to their original intentions.
There are, for instance, examples of AI systems in other contexts showing fewer high-level job openings to female applicants or recommending harsher sentences for ethnic minorities. It is not hard to imagine an analogous situation occurring in a financial crime or AML context. For instance, a transaction-monitoring system based on an unsupervised machine learning algorithm could potentially make decisions prejudicial to certain names, places or even genders. The repercussions of this could be significant for a firm, both from a financial and reputational point of view.
However, there does not yet appear to be a clear way to limit this risk without detracting from the performance of a system. Pre-programmed rules typically limit the intelligence of a machine, but an overly flexible learning-based approach may pave the way for potential lawsuits or PR headaches. This is, therefore, another area in which firms and their advisers should consider the options available to them and seek to find the right balance before implementing AI into their financial crime or AML processes.
Third-party dependency and systemic risk

In a financial crime and AML context, firms may need to rely primarily on software provided by third parties where they do not have the expertise necessary to develop their own AI programs in-house. This may create a third-party dependency which needs to be identified and managed from the outset of the contractual process. This may give rise to new clauses to manage risks specific to AI systems, although the legal principles involved, at least for now, will be the traditional ones with which we are familiar (contract, property, tort and so on). For example, the ownership and/or control of new code generated by the AI itself while utilising a customer’s data will need to be dealt with under the contract.
Although firms will have at their disposal the legal tools to effectively manage their own relationships with third-party software providers, their collective actions may nevertheless give rise to systemic risk. This is a point that has been raised by the Financial Stability Board (FSB), an international body that monitors and makes recommendations about the global financial system. The FSB’s report on “Artificial intelligence and machine learning in financial services” noted that “banks’ vulnerability to systemic shocks may grow if they increasingly depend on similar algorithms”. It follows that if systemically important financial institutions begin to rely on similar AI software and services from third-party providers and there was to be a systemic shock, such as a popular third-party provider failing or a virus affecting a widely used AI program, then there could be the risk of widespread financial crime and AML systems and controls failures across the financial sector. This is unlikely to escape the notice of authorities with responsibility for regulating systemic risk.
AI and data protection
Using AI in a financial crime and AML context will typically involve the retention and automated processing of vast amounts of personal data, some of which may be sensitive. This will need to be done in accordance with data protection laws.
In particular, the EU General Data Protection Regulation (GDPR), in force from 25 May this year across all Member States, contains provisions dealing specifically with automated decision making. Article 22 GDPR has specific rules to protect an individual where an entity is carrying out solely automated decision-making that has a legal or similarly significant effect on that individual. Certain potential automated AML decisions, such as the freezing of assets, may have a significant effect on individuals, so they are likely to be caught by Article 22.
There is a debate as to whether Article 22 prohibits fully automated decision making (subject to certain exceptions), or whether it merely provides an individual with the right to object to fully automated decision making (again subject to certain exceptions).
The guidance of the Article 29 Working Party (the group of EU data protection authorities charged with agreeing Europe-wide guidance on the GDPR) supports the former interpretation. The guidance was subject to a consultation, with the final version due to be issued this month.
The exceptions in Article 22 are:
- Where the automated decision making is necessary for performance or entry into a contract;
- Where the automated decision making is expressly authorised by EU or Member State law; or
- Where the data subject has given explicit consent.
The exceptions are more limited where certain special categories of data are processed (eg race, ethnicity, religion, health). If any data in these special categories is used by AI in the financial crime or AML context, the only exceptions are explicit consent or where the processing is “necessary for reasons of substantial public interest” and is “proportionate to the aim pursued”.
A careful analysis of which exception(s) apply will be required, typically involving a data protection impact assessment (DPIA), as these exceptions are likely to be interpreted very narrowly.
Even where an exception applies, the GDPR requires certain safeguards including:
- transparency – a data controller needs to be able to describe to a data subject the existence of any automated decision making, meaningful information about the logic involved and the significance and envisaged consequences of the processing for the data subject; and
- a right to obtain human intervention and challenge a decision – this is a further layer of protection and the controller must provide a simple way to exercise these rights.
Firms should consider their approach to algorithmic “interpretability” (above) with this in mind.
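The human-intervention safeguard can be built into a system’s design from the outset, in keeping with the “privacy by design” principle discussed below. As a hedged sketch, a firm might route any automated decision with a legal or similarly significant effect to a human reviewer rather than letting it take effect on a solely automated basis. The effect categories below are invented for illustration.

```python
# Hedged sketch of an Article 22-style safeguard: decisions with a
# significant effect on an individual are routed to a human reviewer
# rather than taken on a solely automated basis. The action names
# here are invented for illustration.

SIGNIFICANT_EFFECT_ACTIONS = {"freeze_assets", "close_account"}

def route_decision(decision):
    """Return who finalises the decision: 'automated' or 'human_review'."""
    if decision["proposed_action"] in SIGNIFICANT_EFFECT_ACTIONS:
        return "human_review"
    return "automated"

routing = route_decision({"proposed_action": "freeze_assets"})
```

Whether a given AML action falls within Article 22 is ultimately a legal question, so the set of actions requiring human review would itself need legal sign-off.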
The GDPR concept of “privacy by design” will also require the design of any new technologies, including new AI systems to be used in a financial crime and AML context, to factor in privacy considerations at the outset. For more information on the GDPR please see a guide by Allen & Overy data protection experts.
Financial crime and money laundering typically have an international dimension. An AI system designed to prevent them needs to be global in scope. This, of course, will give rise to cross-jurisdictional challenges. For example, as noted above, AI systems are typically intensive users of data. Although the GDPR will provide a level of harmonisation within EU Member States, it is not yet clear how legal questions of data protection will be addressed at a transnational level to enable firms to create AI systems that are fit for purpose in a global economy.
It is also, perhaps, worth noting that this article has been written from the perspective of a UK lawyer because the FCA has made tackling financial crime and money laundering a priority, and has been active in promoting innovation in this space. But the concepts it explores are global in nature. And it will not be long, we expect, before the agenda set by Mr Gruppetta and the FCA becomes a more widespread topic of focus.
It is clear that AI technology needs to be approached with care. As with any innovation, there are potential risks as well as benefits. We have considered here how AI software may not always make decisions that are interpretable by humans, how it has the potential to reinforce biases and act contrary to the intentions of its programmers, how individual firms’ relationships with third-party providers could give rise to systemic risks and how any use of AI to tackle financial crime will need to be in accordance with data protection laws. These risks may be just the tip of the iceberg, too.
Having said that, it seems likely that disruptive technological innovation will affect financial crime and AML processes in the near future. AI may, and most probably will, be used to overhaul existing systems and both better prevent money laundering and reduce the cost of compliance for firms. So long as the risks are identified early and subsequently carefully managed, AI has, as the FCA’s Rob Gruppetta pointed out, the potential to “keep criminal funds out of the financial system”. This, we are sure everyone reading this will agree, can only be a good thing.
This article also appeared in the February 2018 edition of the Allen & Overy Legal & Regulatory Risk Note.