Governing AI for humanity

There have been several developments at both international and UK level exploring how best to govern and regulate AI, a technology that is developing rapidly, bringing exciting new opportunities but also emerging threats. In September 2024, the United Nations High-Level Advisory Body on AI published its final report, Governing AI for Humanity. The report notes the urgent need for global governance, and the current inequity of representation within it. Its recommendations include policy dialogue, capacity development, a global AI data framework and a global fund for AI; delivering any of these requires global co-operation. In the UK, the Government published its AI Opportunities Action Plan on 13 January 2025.

On Wednesday 29 January 2025, we hosted an evening discussion at The Royal Society to explore what needs to happen at a global level, the UK’s approach both domestically and internationally, and how we can maximise the benefits while minimising the risks. Our panel of expert speakers comprised Dr Douglas Gurr, Director of the Natural History Museum and Chair of The Alan Turing Institute; Professor Dame Wendy Hall DBE FRS FREng, Regius Professor of Computer Science at the University of Southampton and Member of the UN High-Level Advisory Body on AI; and Adrian Joseph OBE, Board Member and AI Advisor (Direct Line Group, National Lottery, GOSH and NatWest) and former Chief Data and AI Officer at BT Group; with Feryal Clark MP, Parliamentary Under-Secretary of State for AI and Digital Government, joining the panel for the discussion period.

DOI: https://www.doi.org/10.53289/YERS7658

Embracing AI without the risk

Adrian Joseph OBE

Adrian Joseph OBE is one of the UK’s leading applied data- and AI-focused technologists, with over 25 years’ experience in AI, big data, analytics and digital transformation. He is currently a Non-Executive Director at Direct Line Insurance Group plc, Allwyn Entertainment (the UK National Lottery operator) and Great Ormond Street Hospital for Children, and sits on the Technology Advisory Board of NatWest Group. His advisory roles extend to the private equity sector and multiple AI-centric startups.

Summary:

  • Once believing that governance slowed things down, my perspective shifted when I experienced firsthand the consequences of neglecting it
  • It is our responsibility—whether as board members, policymakers, or leaders—to find the right balance between strategy, risk, and resource allocation
  • Boards often find themselves torn between two fears: the dread of a potential AI catastrophe and the anxiety of missing out and falling behind the competition
  • Just as a Formula 1 car needs an expert driver, a pit crew, engaged spectators and safety systems to go faster, AI requires regulation and safeguards to ensure both effective performance and safety.

I have had a complicated relationship with AI and data governance. At times, I felt almost allergic to it. Many individuals in governance roles seemed disconnected from the technology itself, its practical applications, and the real-world risks associated with it—risks were often presented to me in catastrophic terms, without regard to their probability or materiality, or were purely theoretical. Instead of fostering innovation, governance often slowed things down with layers of bureaucracy, complexity, and red tape.

My perspective shifted when I experienced firsthand the consequences of neglecting AI governance. I was involved in a significant AI and data migration programme for a FTSE 100 company that was expected to deliver hundreds of millions of pounds in value over the medium term. We were making excellent progress until we hit a wall. During the transformation, we detected and self-reported a substantial cybersecurity risk that could have exposed the personal information of millions of customers. Fortunately, we caught it in time. However, we then faced the daunting task of standing in front of the board to explain that we needed to pause a major strategic programme. You can imagine how well that went down. Despite the efforts of top-tier internal teams, highly paid consultants, and a leading cloud provider, our review uncovered several critical security risks along with a number of medium- to low-risk concerns. The outcome was a six-month delay to one of the company’s top three strategic programmes.

This experience taught me something invaluable, which I observe in many boards I work with today. Boards often find themselves torn between two fears: the dread of a potential AI catastrophe and the anxiety of missing out and falling behind. They struggle with the dilemma of either letting the genie out of the bottle or trying to lock it away forever. It is our responsibility—whether as board members, policymakers, or leaders—to find the right balance between strategy, risk, and resource allocation.

Governance as an accelerator, rather than a brake

So, how can we govern AI in a way that promotes innovation instead of hindering it? How can we ensure that AI acts as an accelerator rather than a brake?

Here are five key areas where effective AI governance can make a difference:

1. Shape Strategic Direction: When done right, AI governance aligns with corporate values, regulatory requirements, and long-term strategic goals. It helps organisations to build ethical, compliant, and sustainable AI systems.

2. Empower Responsible AI: Governance should focus on people, not just policies. Through training and education, good AI governance ensures employees understand how to use AI safely and responsibly, creating a culture of trust and ethical deployment.

3. Measure Value and ROI: Well-implemented AI governance provides frameworks for tracking investments, measuring return on investment (ROI), and ensuring AI initiatives deliver tangible business and societal value. In many organisations I work with, we evaluate four key levers of value:

  • Revenue Growth: Can we identify the best customers for our B2B teams through effective models, for example?
  • Efficiency Improvements: Are we able to reduce costs, such as optimising field force teams, potentially cutting costs by 20% while also reducing CO2 emissions?
  • Enhanced Customer Experience: Can AI connect the right customer with the right representative for better service and improved upselling opportunities?
  • Risk Mitigation and Management: From fraud detection to quickly extracting contract obligations from extensive documents, good governance enhances risk management capabilities.

4. Drive Adoption: Effective governance should provide a holistic view of AI activities across the organisation, reducing duplication, identifying opportunities, and accelerating adoption. It needs to be a coordinated effort rather than a series of fragmented experiments.

5. Enable Smarter Decisions: Good governance frameworks assist organisations in making informed buy vs. build decisions, assessing vendors, mitigating risks, and ensuring AI procurement aligns with evolving legal and ethical standards. At one organisation I worked with, we formed a cross-functional team to create a responsible AI framework, uniting policy, regulatory, and data protection teams to ensure our AI initiatives were fair, accountable, transparent, and focused on positive outcomes.

Embracing the speed

I’m a bit of a speed geek, so let me put it this way: AI is like a Formula 1 McLaren. It delivers mind-blowing performance, but only when handled by a skilled driver (like Lando Norris), supported by an expert pit crew and engineers at the factory. It needs powerful brakes, robust safety measures, and well-defined rules of the track. And crucially, it needs an engaged audience—public input to shape its future.

AI governance is not a set of bureaucratic roadblocks—it’s the finely tuned safety and performance systems that allow us to go faster, with confidence, and with fewer crashes.

AI is not just another technological evolution; it is increasingly central to the future of economies and societies. I believe that our role is to create governance frameworks that don’t just mitigate risk but actively drive responsible, value-driven AI adoption. We must build trust, transparency, and capability—so that AI can serve as a force for good rather than a source of unintended consequences.

Let’s not be paralysed by fear, nor reckless in our ambition. Instead, let’s drive AI forward—safely, strategically, and at speed.