When conversations turn to AI in financial services, efficiency and automation often dominate the narrative. But for Edmund Towers, Head of Advanced Analytics & Data Science Units at the Financial Conduct Authority (FCA), the real promise of AI lies elsewhere: protecting consumers, preventing harm, and enabling trust at scale.
In his conversation with Dr. Raoul-Gabriel Urma on Data & AI Mastery, Edmund offers a rare view from the regulatory front line, where data science isn’t an abstract capability, but a practical tool shaping how markets operate and how people are protected.
A common misconception is that regulation slows innovation. Edmund challenges this head-on.
At the FCA, innovation and regulation are deeply intertwined. Financial services evolve rapidly, and regulators must evolve with them, not only to keep pace, but to create space for safe experimentation.
This is where outcomes- and principles-based regulation comes in. Rather than prescribing specific technologies or methods, the FCA focuses on the outcomes that matter: fair treatment, transparency, and good consumer results. This approach allows firms to innovate while remaining accountable for real-world impact.
AI, Edmund explains, makes this balance even more important. Its power demands guardrails, but also thoughtful collaboration.
Edmund leads teams applying advanced analytics and machine learning across some of the most critical challenges in financial services.
These are not theoretical use cases. They are deeply human problems, often affecting people at their most vulnerable moments.
Data science enables the FCA to operate at scale, identifying patterns no individual could see, and acting earlier than traditional approaches would allow. In this sense, AI becomes a force multiplier for public good.
One of the most compelling parts of the episode is the discussion of the FCA’s AI Live Testing initiative.
Too often, AI projects stall at the proof-of-concept stage. Firms struggle to move from experimentation to deployment, particularly in regulated environments where the risk of unintended consequences is high.
AI Live Testing addresses this gap by bringing together regulators, firms, and technical teams in controlled, real-world testing environments. The goal is not to rubber-stamp solutions, but to help firms move safely from experimentation to deployment.
This collaborative model reflects a broader shift in regulation, from static oversight to dynamic engagement with technology.
Across the conversation, one theme keeps resurfacing: trust.
For AI to deliver value in financial services, it must be trusted by all who depend on it.
That trust is built through transparency, explainability, and clear accountability. Edmund stresses that AI systems must be understandable not just to data scientists, but to decision-makers and supervisors as well.
Without trust, adoption stalls; at worst, harm occurs.
Another important enabler Edmund highlights is synthetic data.
Access to high-quality data is often a barrier to innovation, especially when privacy and confidentiality are paramount. Synthetic data offers a powerful alternative: enabling experimentation and model development without exposing real customer information.
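As a purely illustrative sketch (not the FCA's or any partner's actual method), the core idea behind synthetic data can be shown in a few lines: fit a simple statistical model to sensitive data inside a secure environment, then share only samples drawn from that fitted model. The "customer transaction amounts" below are themselves simulated stand-ins.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for sensitive "real" data that must stay in a secure
# environment: transaction amounts, roughly log-normal in practice.
real_amounts = rng.lognormal(mean=3.5, sigma=0.8, size=10_000)

# Fit a simple model to the real data (here, just the log-space
# mean and standard deviation)...
log_mu = np.log(real_amounts).mean()
log_sigma = np.log(real_amounts).std()

# ...then generate a synthetic dataset by sampling from the fitted
# distribution. The synthetic rows mimic the real data's statistical
# shape but contain no actual customer records.
synthetic_amounts = rng.lognormal(mean=log_mu, sigma=log_sigma, size=10_000)

print(f"real mean:      {real_amounts.mean():.1f}")
print(f"synthetic mean: {synthetic_amounts.mean():.1f}")
```

Real synthetic-data techniques (copulas, generative models, differential-privacy mechanisms) are far more sophisticated, but the trade-off is the same: preserve enough statistical structure for useful experimentation while exposing no individual's information.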
By collaborating with organisations such as the Alan Turing Institute, the FCA is helping to advance these techniques, opening new pathways for innovation while managing risk responsibly.
Perhaps the most forward-looking part of the conversation is Edmund’s vision for the future.
He imagines a world where AI agents or digital twins help consumers navigate complex financial decisions, from choosing products to managing long-term financial health. In this future, AI doesn’t just serve institutions; it actively empowers individuals.
But realising this vision will require continued collaboration between technologists, firms, and regulators, and a relentless focus on outcomes that genuinely improve people's lives.
When asked how he stays up to date, Edmund emphasises community over mastery. No one can know everything in AI. What matters is staying curious, learning from others, and engaging in open dialogue across disciplines.
For leaders in regulated industries, this mindset is essential. AI will keep evolving, but the responsibility to use it wisely remains constant.
AI in financial services isn’t just about speed or efficiency. It’s about trust, protection, and impact.
As Edmund Towers’ work at the FCA shows, data science can be a powerful tool for public good, when it’s guided by clear principles, collaborative regulation, and a focus on real outcomes.
At Cambridge Spark, we help organisations build the data and AI capabilities needed to operate responsibly at scale, combining technical excellence with leadership, governance, and human judgment.
👉 Explore how we support responsible AI and data leadership.