FCA sets out governance blueprint as banks embrace synthetic data

Jessica Rusu, the FCA’s Chief Data, Information and Intelligence Officer

Synthetic data is moving rapidly from experimental concept to practical tooling across UK banking, payments and financial markets, and the Financial Conduct Authority (FCA) is now signalling what “responsible adoption” must look like.

In a new report published by its Synthetic Data Expert Group (SDEG), the regulator outlines governance expectations that directly affect QA, testing and model-risk teams as financial firms scale up machine-learning and automation programmes.

In her foreword, Jessica Rusu, the FCA’s Chief Data, Information and Intelligence Officer, frames synthetic data as a catalyst for safer digital innovation.

“Synthetic data is one such technology,” she wrote, noting its potential to “unlock the value of data, enable experimentation, model development, and broader innovation across the financial system, all while maintaining strong privacy protections and public trust.”

The SDEG, a cross-industry group spanning banks, fintechs, academia and public-sector specialists, was convened in 2023 to assess how synthetic data can be used safely in areas such as fraud detection, customer analytics, testing, model training and fairness reviews.

The FCA said the first SDEG report highlighted “promising use cases” across fraud, credit and customer insights, while this second publication focuses squarely on governance, controls and practical deployment.

For software testing and QA teams, the report is explicit: synthetic data is increasingly necessary for robust model testing, pipeline validation and privacy-preserving experimentation. But its use carries architectural, legal and operational risks, all of which must be understood at the design stage.

Rusu described the FCA’s aim as enabling “open and practical conversations about how synthetic data is being used, where the challenges lie, and what’s needed to move forward responsibly.”

She added that by convening expert groups, “we can lower the barriers to adoption, build confidence in new techniques, and build a more competitive, future-ready financial system.”

What this means for QA and model-testing teams

Across more than 40 pages, the FCA report offers what is effectively a governance checklist for firms intending to use synthetic data in testing and model development. Among the key themes:

Auditability is treated as essential. Firms must be able to trace every design choice, including generation methods, privacy trade-offs and transformation steps, to ensure model-risk functions can validate whether synthetic inputs meet regulatory expectations around fairness, accuracy and stability.

Synthetic data can also remove or amplify bias, depending on how it is applied. The report emphasises that teams must evaluate real data for bias before generation, document every manipulation decision, and retest synthetic datasets iteratively to ensure they do not distort outcomes in credit, fraud or customer-impacting models.
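That retest loop can be sketched in a few lines. The following is a minimal, hypothetical example (the column names `segment` and `approved` and the records are invented, not from the report): compare a simple per-group positive-outcome rate between a real dataset and its synthetic counterpart, and flag how far generation has shifted it.

```python
# Hypothetical sketch: measure whether synthetic generation has shifted a
# simple fairness signal -- the positive-outcome rate per customer group.

def positive_rate_by_group(rows, group_key, outcome_key):
    """Positive-outcome rate per group, e.g. approval rate per segment."""
    totals, positives = {}, {}
    for row in rows:
        g = row[group_key]
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + row[outcome_key]
    return {g: positives[g] / totals[g] for g in totals}

def bias_drift(real_rows, synth_rows, group_key, outcome_key):
    """Largest absolute change in any group's positive rate, real vs synthetic."""
    real = positive_rate_by_group(real_rows, group_key, outcome_key)
    synth = positive_rate_by_group(synth_rows, group_key, outcome_key)
    return max(abs(real[g] - synth.get(g, 0.0)) for g in real)

# Toy data: segment A's approval rate doubles after generation, B's halves.
real = [
    {"segment": "A", "approved": 1}, {"segment": "A", "approved": 0},
    {"segment": "B", "approved": 1}, {"segment": "B", "approved": 1},
]
synth = [
    {"segment": "A", "approved": 1}, {"segment": "A", "approved": 1},
    {"segment": "B", "approved": 1}, {"segment": "B", "approved": 0},
]
drift = bias_drift(real, synth, "segment", "approved")
print(f"max per-group drift: {drift:.2f}")  # → max per-group drift: 0.50
```

A check like this would run on every regeneration of the synthetic set, with the drift figure logged alongside the generation parameters so the audit trail the report calls for is preserved.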

Although synthetic data reduces privacy risk, it is not automatically “safe”. The FCA urged firms to conduct privacy-risk assessments, consider adversarial testing, and apply multiple privacy metrics, recognising that no single test is sufficient.
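The "no single test is sufficient" point is easy to illustrate. Below is a minimal sketch (toy records, not any real metric suite) of two complementary checks: an exact-match rate, which catches verbatim copies of real records, and a distance-to-closest-record, which catches near-copies that the first check misses.

```python
# Hypothetical sketch: two complementary privacy checks on synthetic records.
# Neither is sufficient on its own, which is the FCA report's point about
# applying multiple privacy metrics.

def exact_match_rate(real, synth):
    """Fraction of synthetic records identical to some real record."""
    real_set = {tuple(r) for r in real}
    return sum(tuple(s) in real_set for s in synth) / len(synth)

def min_dcr(real, synth):
    """Smallest distance-to-closest-record; a tiny value can mean a
    synthetic row is a near-copy of a real individual."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(min(dist(s, r) for r in real) for s in synth)

# Toy (age, income) records; the first synthetic row leaks a real record.
real = [(35, 52000.0), (29, 41000.0), (61, 88000.0)]
synth = [(35, 52000.0), (44, 60000.0)]

print(exact_match_rate(real, synth))  # → 0.5
print(min_dcr(real, synth))           # → 0.0
```

In practice firms would combine metrics like these with adversarial tests such as membership-inference attacks, per the report's recommendation that privacy-risk assessments draw on several independent signals.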


“We can lower the barriers to adoption and build confidence in new techniques.”

– Jessica Rusu

The report highlighted Train-Synthetic-Test-Real (TSTR) evaluation as a critical tool for QA teams. Synthetic datasets may appear statistically sound yet still degrade performance when exposed to real-world conditions. Performance benchmarking must therefore be continuous, with detailed documentation supporting each iteration.
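The TSTR loop itself is simple to express. The sketch below uses a deliberately trivial threshold classifier and invented toy data (a real harness would plug in the firm's actual models and datasets); it shows the evaluation pattern: train on synthetic data, score on held-out real data, and compare against the train-real baseline.

```python
# Hypothetical TSTR sketch: a synthetic set can look plausible yet train a
# model that degrades on real data. We compare train-real-test-real (TRTR)
# against train-synthetic-test-real (TSTR) on the same real hold-out.

def fit_threshold(rows):
    """Learn a 1-D decision threshold as the midpoint of the class means."""
    pos = [x for x, y in rows if y == 1]
    neg = [x for x, y in rows if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(threshold, rows):
    """Fraction of rows where (value > threshold) matches the label."""
    return sum((x > threshold) == (y == 1) for x, y in rows) / len(rows)

# Toy (feature, label) pairs; the synthetic set is skewed upward, so the
# learned threshold shifts and a real positive case gets misclassified.
real_train = [(1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1)]
real_test  = [(1.5, 0), (6.0, 1), (8.5, 1)]
synthetic  = [(6.0, 0), (7.0, 0), (8.0, 1), (9.0, 1)]

trtr = accuracy(fit_threshold(real_train), real_test)  # train-real baseline
tstr = accuracy(fit_threshold(synthetic), real_test)   # train-synthetic

print(f"TRTR={trtr:.2f}  TSTR={tstr:.2f}  gap={trtr - tstr:+.2f}")
```

The gap between the two scores is the benchmark the report says should be tracked continuously, with each iteration's figures documented alongside the generation settings that produced the synthetic set.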

The FCA characterised synthetic data adoption as a structural change in how financial services manage and test models.

Rusu wrote that the regulator’s goal is to support “safe experimentation in a fast-moving space” and expressed gratitude to the SDEG for shaping practical insights that organisations can apply immediately.

She concluded: “I hope the insights and actions in this report will be informative and encouraging for those exploring the use of synthetic data in financial services.”

For QA and testing teams, the message is clear: synthetic data is becoming foundational to how banks build, test and validate next-generation systems, but its deployment must sit within rigorous, transparent and continuously monitored governance frameworks.



