Why Sarah Self, Aviva's AI Director, Refuses to See Risk and Customer Value as a Trade-Off

April 29, 2026 | Thought Leadership

The question most organisations get wrong when deploying AI is not whether to move fast or move carefully. It is whether those two things are actually in conflict.

For Sarah Self, AI Director at Aviva and former Chief Security Officer, framing risk management and customer value as opposite ends of a dial is not just unhelpful. It is a signal that an organisation has not yet thought clearly enough about what a good outcome actually looks like.

Speaking with Dr Raoul-Gabriel Urma on the Data and AI Mastery podcast, Sarah makes a case that will resonate with any senior leader trying to build confidence in AI deployment without losing control of the risks attached to it.

The CSO Who Became an AI Director

Sarah's route into AI leadership is not the conventional one. She spent close to a decade in cybersecurity, operating at the Chief Security Officer level, before moving into her current role at Aviva. It is a transition that might seem lateral from the outside, but Sarah sees the two disciplines as built on the same foundation.

Both require you to think across people, process and technology simultaneously. Both are in constant motion, evolving at a pace that demands genuine curiosity rather than the comfort of settled answers. And both, she argues, are fundamentally about enablement rather than restriction.

That last point is worth pausing on. A common misconception about cybersecurity is that its job is to stop things from happening. In reality, the point of effective security is to make the right things happen safely. The controls exist to protect the outcome, not to prevent it. Sarah brings exactly the same logic to AI governance, and it shapes everything about how Aviva has approached deployment.

Guardrails Are Not the Enemy of Speed

When Aviva decided to build its own internal AI platform rather than deploy off-the-shelf tools directly, the decision was not driven by caution for its own sake. It was driven by a clear-eyed view that cutting corners on governance in the early stages creates compounding problems later.

The platform, built on industry-leading large language models but architected and controlled by Aviva, came with transparency controls and hallucination controls embedded from day one. Not added later. Not retrofitted when something went wrong. Designed in from the start.

Sarah's framing here is direct. If you define what a good customer outcome genuinely looks like, data protection and ethical behaviour are not constraints on that outcome. They are requirements for it. An organisation that delivers something fast and cheaply but handles customer data poorly along the way has not delivered a good outcome at all. It has delivered a failure that simply has not surfaced yet.

The Real Cost of Leaving People Behind

Governance and platform architecture are only part of the picture. Sarah is equally emphatic about the human dimension of AI transformation, and she uses a phrase that captures the risk precisely: organ rejection.

Introduce a technically brilliant AI solution into an organisation where the people are afraid of it, resentful of it, or simply confused by it, and the result is the same regardless of the quality of the underlying technology. Resistance. Avoidance. Failure to adopt. The organisation produces the same outcomes as before, just with an expensive AI layer sitting unused on top.

Aviva's response has been a deliberate investment in democratisation. Internal communities, training programmes, and opportunities for every person across the business to engage with AI tools directly and develop their own understanding. Sarah's conviction here goes beyond commercial pragmatism, though she makes that case too. She believes access to AI capability is a genuine force for good in people's lives and that keeping it confined to a technical elite serves no one.

What Good Actually Looks Like

Two production use cases from Aviva illustrate the approach in practice. A claims summarisation tool reduced the hold time customers experience while handlers familiarise themselves with a case, cutting it by over 50%. A medical underwriting solution, which Sarah describes as industry-leading, accelerates the processing of complex personal medical histories at 99% accuracy.

Both cases share the same structure: a clearly defined problem, a measurable customer outcome, and AI deployed with the right controls already in place. Neither required a trade-off between speed and safety. The discipline of thinking about both together, from the outset, is precisely what made them work.

That is the lesson Sarah Self is taking from Aviva's experience into every conversation about AI at scale: not that risk and value can never be in tension, but that if you are still framing them as a trade-off, you are probably asking the wrong question.

At Cambridge Spark, we work with organisations navigating exactly this challenge: building the data and AI capability, adoption strategy and leadership confidence to turn experimentation into lasting transformation.

👉 Explore how we support data and AI transformation

Upskill your workforce

Upskill your workforce and accelerate your data transformation with expert technical programmes designed to create impact.

Contact us