Regulatory technology (RegTech) providers aim to help businesses comply with regulatory change in the financial services sector. However, with the tremendous increase in data volumes, manual processing has become expensive and time-consuming.

In many cases, this has led to increased automation and the integration of artificial intelligence (AI) into RegTech applications, to help predict risks and aid compliance. In others, supervisory technology (SupTech) is guiding internal processes.

Financial services companies have long been in the vanguard of AI and automation, and AI is expected to drive RegTech growth over the next seven years, according to Transparency Market Research.

The analyst firm has published a new report, Artificial Intelligence and the RegTech Market: Global Industry Analysis, Size, Share, Growth, Trends and Forecast 2018–2026. It explains that, based on project type, the solutions market can be segmented into employee surveillance, compliance data management, fraud prevention, audit trails, and more, across industries including IT and telecommunications, healthcare, energy and utilities, and banking, financial services, and insurance (BFSI).

This crossover of the RegTech and AI markets should come as no surprise, given the increased scrutiny of financial services providers in the wake of the 2008-09 crash. Risk hotspots such as Know Your Customer (KYC) compliance, terrorist financing, and money laundering add to the challenge, as do regulatory change and the rise of open banking.

Asia Pacific and the Middle East & Africa are expected to be the most promising markets for AI-enabled RegTech, due to significant investments in BFSI, adds the report – confirming the findings of a number of analyst surveys in recent months, which see APAC leading growth.

But not everyone in the region is convinced of the wisdom of rushing into AI-enabled RegTech, especially when attractive concepts such as risk avoidance and fraud prevention are in the frame. After all, if the risk of a crime is sidestepped before it can be committed, no wrongdoer ever actually exists – yet a customer may still end up being treated as one.

In October 2018, the financial services industry in Hong Kong raised concerns about the use of AI and machine learning in regulatory compliance. While the ability to predict risk may be advantageous in the sector, it suggested, the technology may actually reduce banks’ ability to meet their accountability and auditing commitments, to customers as well as to regulators.

According to a report in the South China Morning Post, Guy Sheppard, APAC head of financial crimes, analytics and intelligence for transactions network Swift, explained that a bank’s internal RegTech and risk assessment model may dictate how it ‘off-boards’ certain customers before they’ve done anything wrong.

“I would like to see how regulators will address a bank off-boarding a certain profile of customers after their internal model has dictated that they are beyond the bank’s stated risk appetite,” he said.

Neural networks and deep learning solutions are often termed ‘black box’ technologies, because while they use sophisticated structures, algorithms, and mathematical models to process data, they may offer little clue as to how they arrived at a conclusion.
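To make the contrast concrete, here is a minimal, hypothetical sketch in Python (assuming scikit-learn and invented transaction features – it is not drawn from any real bank's model). A logistic regression exposes coefficients an auditor can read, while a neural network produces a risk score from thousands of learned weights with no single 'reason' attached.

```python
# Hypothetical illustration only: synthetic data and invented features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy transaction features: amount, country-risk score, account age.
X = rng.normal(size=(1000, 3))
# Toy 'flagged' labels generated from a simple rule plus noise.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

# Interpretable model: each coefficient shows how a feature moves the risk score.
logit = LogisticRegression().fit(X, y)
print("logistic coefficients:", logit.coef_)

# 'Black box' model: the score emerges from thousands of learned weights,
# with nothing to point at when a customer or regulator asks 'why?'.
mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, y)
print("neural net risk score for one case:", mlp.predict_proba(X[:1])[0, 1])
```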

What's more, AI is adept at identifying patterns in data – which is why it holds out the promise of preventative healthcare, for example – but a pattern may not constitute actual evidence of a crime, or proof that a particular type of customer is a criminal.

Such a lack of transparency and inability to ‘show their workings’ is a pain point that regulators may have to address – or accept – with RegTech applications. This is particularly the case when many AI systems need to be trained with data that may contain institutional or historic biases.
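The bias point can be sketched in the same hypothetical terms (again assuming scikit-learn and made-up data): if historic off-boarding decisions in the training set were skewed against a particular customer segment, a model fitted to those labels will tend to reproduce that skew simply by learning from the data.

```python
# Hypothetical sketch of historic bias propagating through training labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

# One genuine risk signal, plus a segment flag that carries no real risk.
risk_signal = rng.normal(size=n)
segment = rng.integers(0, 2, size=n)  # 1 = a historically over-scrutinised group

# Historic off-boarding decisions: driven by real risk, but also by the segment flag.
offboarded = (risk_signal + 1.5 * segment
              + rng.normal(scale=0.5, size=n) > 1.5).astype(int)

X = np.column_stack([risk_signal, segment])
model = LogisticRegression().fit(X, offboarded)

# The fitted model assigns a large positive weight to segment membership itself,
# baking the historic bias into every future risk score.
print("weight on risk signal: %.2f, weight on segment flag: %.2f" % tuple(model.coef_[0]))
```

A system like this can report high accuracy while quietly penalising segment membership itself – exactly the kind of opaque outcome that regulators will have to decide whether to address or accept.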
