Op-ed: The AI safety champion – and why every tech team should have one

Authored by Robert McBlain, Global Data Protection and AI Compliance Lead at Thoughtworks
The debate around AI in software delivery has moved past speculation. Teams aren’t asking if they’ll use AI; they already are. The real question is how to adopt AI systematically, without creating new classes of risk.
A decade ago, “mobile-first” forced organisations to redesign everything about digital experiences, from user interfaces to backend systems. “AI-first” is a similar inflection point. It doesn’t mean every task should be handed to a model. It means AI must be considered at every stage of the lifecycle, from discovery and design to testing, modernisation, and ongoing operations.
But there is a tension emerging. Moving fast with AI is easy. Moving safely can be much harder.
AI-first doesn’t mean AI only
AI-first is not about replacing people. It’s about embedding AI into workflows from the very beginning, rather than treating it as an afterthought, and rethinking how AI can augment a project from its first step. This shift requires more than technology; it needs behaviour change. People must get hands-on, experiment, and learn where AI adds value, where it falls short, and how their roles will evolve. Until teams use these tools, they won’t know what they do well and what they don’t.
The first stage of excitement around AI in software delivery focused on developer productivity. Many teams hoped coding assistants would double the speed of delivery. The reality has been more modest: closer to a 10% lift, according to our data. Useful, yes, but not transformational. Where AI is delivering real power for organisations is in the sheer breadth of its applications.
Engineering teams are using AI to support platform operations by suggesting fixes and to accelerate testing by auto-generating test scripts. One area where we are helping many clients is the modernisation of legacy systems that once seemed impossible to fix. AI-assisted tools can analyse decades-old code, map hidden dependencies, and propose new paths. Suddenly, systems that felt frozen in time have options. But here too, speed can be deceptive. Left unchecked, AI can just as easily accelerate the spread of technical debt.
The invisible risks of AI-first delivery
Most organisations have AI ethics frameworks or governance policies on paper. Unfortunately, these rarely reach the trenches of delivery, where small decisions can accumulate into bigger consequences. A developer may paste confidential information into a chatbot to get a quick answer, or merge AI-generated code without realising it lacked security checks. Another risk we see with clients is deploying a model and then never monitoring it, allowing it to degrade. A model trained on last week’s data may struggle with today’s context. Even the latest models can start producing responses that feel outdated or inaccurate, and can quickly introduce bias or lose accuracy.
Why every team needs an AI safety champion
This is why we advocate embedding an AI safety champion inside delivery teams. They’re not compliance officers standing on the sidelines. They’re practitioners who help colleagues use AI responsibly while keeping projects moving. They can translate between technical and business stakeholders, bring a strong understanding of an organisation’s existing systems and workflows, and collaborate on an ongoing basis with data and AI teams to supplement their work. AI safety champions can also help build workforce and public trust in AI applications and safeguard the fair distribution of AI benefits through safe and equitable deployment.
Day-to-day, an AI safety champion might be on hand to flag when confidential data shouldn’t be entered into a tool, ensure AI-generated code is reviewed for quality and security, help a team set up safe sandboxes for experimentation, or translate abstract governance policies into practical working processes. We’ve embedded AI safety champions into internal projects and client engagements worldwide. The result is faster innovation with fewer surprises, because risks are surfaced early, not after they cause damage.
Nobody knows what AI-first delivery will look like in six months, let alone five years; the technology is evolving too quickly. Organisations that succeed will treat AI adoption as a series of hypotheses to test, not as a one-off rollout or a tick-box exercise. Here too, AI safety champions play a role. They ensure experiments are run in a way that generates learning without introducing uncontrolled risk.
The leadership imperative
For technology leaders, the choice is not whether to adopt AI-first practices, but how. Will you approach them deliberately, embedding safety into the delivery process? Or will you let adoption spread informally, only to confront risks later?
Embedding AI safety champions is one of the most pragmatic steps you can take today. They bridge the gap between high-level AI principles and the realities of teams under delivery pressure. They don’t slow progress; they make it real, reliable, and repeatable. In an era where hype moves faster than reality, they can help make sure your AI-first future is built to last.