"No plan survives first contact with the enemy." Helmuth von Moltke the Elder.
There's a version of AI adoption that looks impressive in a slide deck but quickly unravels in execution. Models that nobody trusts. Outputs nobody can explain or justify. Productivity gains in one corner of the business that quietly break something three steps downstream. It's become clear that the difference between AI that delivers impact and AI that doesn't comes down to three things: rigour, trust, and a genuine, open assessment of risk.
Rigour. The discipline with which AI processes and tools are tested and validated to work under critical conditions.
Trust. Trust has to be earned from the stakeholders who will be accountable for the tool in service.
Risk. An appreciation that AI tools do not deliver deterministic outputs: there is a quantifiable risk that an output will, at some point, be wrong or non-optimal.
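To make that last point concrete: even a highly accurate model produces a steady stream of errors at scale. Here is a minimal sketch in Python; the accuracy figure and the daily volume are purely illustrative assumptions, not benchmarks.

```python
# Illustrative only: the accuracy and volume figures are assumptions.
model_accuracy = 0.99        # assumed per-output accuracy
outputs_per_day = 10_000     # assumed volume of automated outputs

expected_errors = outputs_per_day * (1 - model_accuracy)
print(f"Expected wrong or non-optimal outputs per day: {expected_errors:.0f}")

# Probability that a run of 100 consecutive outputs is entirely error-free
p_clean_run = model_accuracy ** 100
print(f"Chance of 100 consecutive correct outputs: {p_clean_run:.0%}")
```

The specific numbers don't matter; the point is that "mostly right" at scale still means errors every day, and governance has to plan for them.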
Last week I had the privilege of helping to run three AI leadership courses for executives in the financial services industry. These weren't first-time awareness sessions – these were senior leaders on our Leading with AI programme, wrestling seriously with how AI fits into their organisations. What struck me wasn't the engagement or the enthusiasm – though there was plenty of both – but the nature of the questions being asked. Across all three courses, the same themes kept surfacing.
Here's what those leaders were asking about AI – and what it means for getting transformation right.
1. "Will AI make my team forget how to think?"
Beneath the excitement around AI's productivity potential sits an anxiety: what happens to human capability when AI starts to make significant technical contributions? It's an observed phenomenon in software engineering that junior engineers who lean too heavily on code generation see their own capability degrade.
So it's a legitimate question. If practitioners stop executing critical tasks themselves – data processing, complex analysis, code debugging – do those skills quietly atrophy? And if they do, what's the fallback when the system fails?
The answer isn't to slow down AI adoption; it's to be smart about it. For some tasks, full automation is the right approach. For others – surgical procedures, landing an aircraft, any mission-critical decision where AI still brings real benefit – maintaining human capability as a genuine fallback isn't optional; it's necessary governance. The conversation every leadership team needs to have is not "should we use AI here?" but "if AI fails here, can we still land the plane?"
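One way to make that question operational is an explicit escalation path: let the model handle what it's confident about, and route the rest to a person. The sketch below is a hypothetical pattern, not a prescription; the threshold and the `model_predict` / `human_review` callables are placeholders you'd define for your own use case.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

CONFIDENCE_THRESHOLD = 0.90  # assumed value; set per use case and risk appetite

@dataclass
class Decision:
    output: str
    confidence: float
    decided_by: str  # "model" or "human"

def decide(case: str,
           model_predict: Callable[[str], Tuple[str, float]],
           human_review: Callable[[str], str]) -> Decision:
    """Route a case to the model, falling back to a human below threshold."""
    output, confidence = model_predict(case)
    if confidence >= CONFIDENCE_THRESHOLD:
        return Decision(output, confidence, decided_by="model")
    # The fallback both resolves the case and keeps human skills exercised.
    return Decision(human_review(case), confidence, decided_by="human")
```

A pattern like this does double duty: it caps the risk of low-confidence outputs, and it guarantees that the human fallback is used often enough to stay sharp.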
2. "Our data is a mess – AI will have to wait."
A lot of companies believe their technical debt is a blocker. Years of legacy architecture, siloed platforms and inconsistent data standards – surely that has to be resolved before AI can deliver value?
Not necessarily. In fact, AI can actively accelerate the navigation of complex data environments and rebuild trust in the data and what it's telling you. Proof-of-concept models can be stood up quickly using ad hoc interfaces and data-cleansing agents, meaning you don't need a pristine data estate to start generating impact. The goal isn't perfection before you begin; it's learning fast, building trust in your data sources as you go, and understanding the quality and provenance of what's informing your models. In other words, the goal is getting to the modelling and outcome stage as quickly as you can.
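As a sketch of what "learning fast" can look like in practice, the snippet below profiles a raw extract for completeness and duplicates before any modelling is attempted. The column names and data are hypothetical, and a real pipeline would also record where each field came from.

```python
import pandas as pd

def profile_extract(df: pd.DataFrame) -> pd.DataFrame:
    """Quick data-quality profile to run before any proof-of-concept modelling."""
    return pd.DataFrame({
        "missing_pct": df.isna().mean().round(3),
        "unique_values": df.nunique(),
        "dtype": df.dtypes.astype(str),
    })

# Hypothetical extract from a legacy system
raw = pd.DataFrame({
    "customer_id": [101, 102, 102, None],
    "balance": [250.0, 99.5, 99.5, 410.2],
})
print(profile_extract(raw))
print(f"Duplicate rows: {raw.duplicated().sum()}")
```

Ten lines of profiling won't fix years of technical debt, but it tells you quickly which data sources you can trust enough to start modelling against.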
Technical debt is certainly real. But now it's a reason to use smarter approaches, not a blocker on progress.
3. "We optimised one team – and broke another."
This one calls for systems thinking. Leaders arrive thinking about AI within individual business functions. They leave thinking about AI across the entire business.
Making one function hyper-productive without enabling downstream teams to absorb that output doesn't create value – it creates bottlenecks, frustration, and risk. A sales team generating ten times the leads is a liability if operations, legal and compliance can't scale their execution to match.
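The arithmetic behind that bottleneck is worth sketching, even crudely. Every figure below is invented for illustration.

```python
# Illustrative only: all volumes are assumptions.
leads_generated_per_week = 500      # sales output after a 10x AI uplift
downstream_capacity_per_week = 80   # legal/compliance throughput, unchanged

backlog = 0
for week in range(1, 5):
    backlog += max(0, leads_generated_per_week - downstream_capacity_per_week)
    print(f"Week {week}: unprocessed leads = {backlog}")
# The backlog grows by 420 a week: the upstream gain becomes downstream risk.
```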
Genuine AI transformation requires a holistic view of your business flows. That means collaborative governance, cross-functional engagement, and a clear assessment of the end-to-end process before optimising any single part of it. Done well, the return on investment transforms the organisation. Done in silos, it just moves the problem downstream.
The bottom line
AI is not a technology project. It's a strategic capability question – one requiring thought and investment in people, process, risk mitigation and governance as much as in models and data.
The leaders who will get the most from it are those willing to ask hard questions before they scale: Where does accountability sit when a model gets it wrong? What does good data provenance actually look like in our organisation? Which decisions should never be fully delegated to an algorithm?
These aren't questions that slow transformation down. They're the questions that make transformation stick. The organisations building genuine, durable AI capability right now aren't moving faster than everyone else – they're moving more deliberately. They're training their people not just to use AI tools, but to understand them, interrogate them, and govern them responsibly.
That's the standard worth aiming for. From what we’ve seen on our flagship Leading with AI programme, it's exactly the thinking that leaders are engaging with – they just need the space and the tools to do so.
References
- How AI assistance impacts the formation of coding skills. Anthropic. 29 Jan 2026.
- Employability Risk: Is Excessive AI Use Sabotaging Your Developer Skills? Nicolas Frasson Boaroto. 4 Dec 2025.



