
UK regulators highlight increased use of ML


A report published by the Bank of England and the Financial Conduct Authority – Machine Learning in Financial Services – concludes that the number of machine learning-based apps will nearly triple over the next two years, and that IT risks could be amplified as a result.

Machine learning – whereby algorithms make decisions or recognise patterns from data without being explicitly programmed to do so – has developed to the point that ML models can often make better predictions than traditional models, said the report’s authors.
The Bank of England and the FCA sent their survey to nearly 300 firms in the first half of 2019, including banks, brokers, insurers and investment managers, and received 106 responses.

The survey found that ML is increasingly used in UK financial services, with two thirds of firms saying they employ it in some form. Deployment is most advanced in the banking and insurance sectors. The single most common use is for anti-money-laundering and fraud detection systems.

Respondents said that they expected the number of ML apps they use to nearly triple over the next two years.

“The promise of ML is to make financial services and markets more efficient, accessible and tailored to consumer needs,” the report said. “At the same time, existing risks may be amplified if governance and controls do not keep pace with technological developments.” One such risk is that model validation and governance frameworks will not be able to keep pace with technological developments in machine learning.

While supervisors at the Bank of England and the FCA say they are “technology neutral”, the report’s authors note that ML use “can alter the nature, scale and complexity of IT applications, and thus a firm’s IT risks.” There are three main reasons why, the report said: ML applications are complex, they use new and complex data-sets, and they are usually built from a large number of interacting components. Respondents to the survey also said that risks can arise from the difficulty, or impossibility, of explaining ML models, and they expressed diverse views on how their firms could manage the ethical issues involved in employing machine learning apps.

A key finding of the report is that financial firms are keen to improve their validation frameworks for testing that ML models work as intended. The most popular method used to validate ML models is monitoring how decisions and outcomes perform against benchmarks, including A-B testing against outcomes produced by non-ML apps.
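As an illustration of the kind of benchmark comparison the report describes, the sketch below scores a stand-in “ML” decision function against a simple non-ML rule on the same synthetic transactions. Everything here is hypothetical: the data, the risk scores, the thresholds and the function names are assumptions, not anything from the surveyed firms.

```python
import random

random.seed(0)

def rule_based_flag(amount):
    # Non-ML baseline: flag any transaction above a fixed amount (made-up rule).
    return amount > 900

def ml_flag(amount, score):
    # Stand-in for an ML model's decision: flag on a fabricated risk score.
    return score > 0.8

def accuracy(decisions, labels):
    # Fraction of decisions that match the true fraud labels.
    correct = sum(d == l for d, l in zip(decisions, labels))
    return correct / len(labels)

# Synthetic transactions: (amount, ml_score, is_fraud). Fraudulent ones tend
# to have larger amounts and higher scores, purely for illustration.
transactions = []
for _ in range(1000):
    fraud = random.random() < 0.05
    amount = random.uniform(800, 1200) if fraud else random.uniform(10, 1000)
    score = min(1.0, random.gauss(0.9 if fraud else 0.3, 0.1))
    transactions.append((amount, score, fraud))

labels = [f for _, _, f in transactions]
ml_acc = accuracy([ml_flag(a, s) for a, s, _ in transactions], labels)
rule_acc = accuracy([rule_based_flag(a) for a, _, _ in transactions], labels)
print(f"ML accuracy:   {ml_acc:.2%}")
print(f"Rule accuracy: {rule_acc:.2%}")
```

In a real validation framework the two approaches would be compared on held-out or live traffic rather than synthetic data, and on business-relevant metrics (false-positive cost, detection rate) rather than raw accuracy.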

The majority (76%) of ML use cases have been developed internally by respondent firms, rather than outsourced to third-party vendors. And the most common safeguards that firms are employing are alert systems and “human-in-the-loop” mechanisms. These can help flag when an ML model is not working as intended, such as in the case of ‘model drift’, which can occur as ML applications are updated and then make decisions outside their original parameters. “Some respondents highlight the need for model lifecycle management platforms to enable continuous monitoring of model performance,” the report said.
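The alert mechanism described above can be sketched in a few lines: track a model’s rolling accuracy and raise a flag for human review when it drifts below a threshold. This is a minimal illustration, not a description of any surveyed firm’s system; the window size, threshold and class names are all assumptions.

```python
from collections import deque

class DriftMonitor:
    """Hypothetical sketch: alert when rolling accuracy falls below a threshold."""

    def __init__(self, window=100, threshold=0.9):
        self.window = deque(maxlen=window)  # recent correct/incorrect outcomes
        self.threshold = threshold

    def record(self, prediction, actual):
        """Record one prediction/outcome pair; return True if an alert fires."""
        self.window.append(prediction == actual)
        if len(self.window) == self.window.maxlen:
            rolling_accuracy = sum(self.window) / len(self.window)
            if rolling_accuracy < self.threshold:
                return True  # escalate to a human reviewer (human-in-the-loop)
        return False

monitor = DriftMonitor(window=50, threshold=0.9)
# Simulate healthy behaviour followed by drift: after step 120 the model
# starts mispredicting every outcome.
alerts = []
for i in range(200):
    actual = i % 2
    prediction = actual if i < 120 else 1 - actual
    if monitor.record(prediction, actual):
        alerts.append(i)
print(f"first alert at step {alerts[0] if alerts else None}")
```

A production version would monitor business metrics per model version and feed alerts into the model lifecycle management platforms the report mentions, but the core idea, continuous monitoring with a human escalation path, is the same.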

The Bank of England and the FCA have announced plans to establish a public-private group to explore technical questions raised in the report, and they have also said they plan to repeat the machine learning survey in 2020.