There have been several developments at both the international and UK level exploring how best to govern and regulate AI, a rapidly developing field in which exciting new opportunities, but also potential threats, are emerging. In September 2024, the United Nations High-Level Advisory Body on AI published its final report, Governing AI for Humanity. This notes the urgent need for global governance, and the current inequity of representation in such governance. It makes several recommendations, including policy dialogue, capacity development, a global AI data framework and a global fund for AI. Delivering any of these recommendations requires global co-operation. In the UK, the government published its AI Opportunities Action Plan on 13 January 2025. On Wednesday 29 January 2025, we hosted an evening discussion at The Royal Society to explore what needs to happen at a global level, the UK’s approach domestically and internationally, and how we can maximise the benefits whilst minimising the risks. Our panel of expert speakers included Dr Douglas Gurr, Director of the Natural History Museum and Chair of The Alan Turing Institute; Professor Dame Wendy Hall DBE FRS FREng, Regius Professor of Computer Science at the University of Southampton and Member of the UN High-Level Advisory Body on AI; and Adrian Joseph OBE, Board Member and AI Advisor (Direct Line Group, National Lottery, GOSH and NatWest) and former Chief Data and AI Officer at BT Group; with Feryal Clark MP, Parliamentary Under-Secretary of State for AI and Digital Government, joining the panel for the discussion period.
DOI: https://www.doi.org/10.53289/GMBU5116
Doug is Director of the Natural History Museum. He is also Chair of The Alan Turing Institute and Interim Chair of the Competition and Markets Authority. Previously, Doug was Country Manager of Amazon UK and President of Amazon China. Earlier roles included the civil service, partner at McKinsey and Company, Director at Asda-Walmart, Founder and CEO of internet start-up Blueheath, Chair of the British Heart Foundation and Chair of the Science Museum Group. He has degrees in Mathematics from the University of Cambridge and a PhD in Computing from the University of Edinburgh, and previously taught mathematics and computing at the University of Aarhus in Denmark.
Summary:
I am going to bring us down from global regulation to the practical realities of AI—what is really happening at the coalface. I will explore what AI is, how it creates value, what can go wrong, and how we might think about regulating it appropriately.
What is AI?
Think of any organisation, whether it is a university, a commercial company, a charity, or even The Royal Society. You can view an organisation as a decision-making machine, making numerous decisions every day. Fundamentally, AI is just a sophisticated decision-making tool. It takes inputs, which we usually convert into numerical form (bits and bytes), applies algorithms, and produces outputs. That is essentially its function: taking inputs and generating outputs.

I often ask business leaders: how does AI actually create value? What can AI do to add value to your organisation, whether that value is social, economic, or commercial? By considering AI as part of an organisation’s decision-making process, we can see that decisions can be made by humans, randomly, or by machines.

When discussing the value of AI, there is often a misconception. In any decision-making scenario, there are two key dimensions to consider: fidelity (the quality of decisions) and velocity (the speed of decisions). Many debates around automation and replacing human workers with AI assume that AI creates value by making better decisions. For example, we have AI systems that can analyse skin images to identify potential cancer better than even some expert clinicians. While that sounds promising, the real issue is often not whether AI improves decision quality; it is how much faster decisions can be made.
We are talking about speed increases that can reach billions of times faster, creating significant value in many domains.
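The velocity point can be made concrete with a toy simulation. All the numbers below are my own illustrative assumptions, not clinical data: a perfectly accurate expert team that can review only a fraction of cases will catch fewer true cases overall than a deliberately modest model that screens everything.

```python
import random

random.seed(0)

# Toy, made-up numbers for illustration -- not real clinical figures.
N_SCANS = 100_000          # scans arriving for review
PREVALENCE = 0.02          # fraction genuinely needing follow-up
HUMAN_CAPACITY = 5_000     # scans the available experts can review
AI_SENSITIVITY = 0.85      # a deliberately modest model

# True condition status of each scan (unknown to the reviewers).
scans = [random.random() < PREVALENCE for _ in range(N_SCANS)]

# Scenario A: perfect experts, but they can only review a sample.
found_by_humans = sum(scans[:HUMAN_CAPACITY])

# Scenario B: an imperfect model screens every scan and flags
# suspected cases for human follow-up.
found_by_ai = sum(1 for s in scans if s and random.random() < AI_SENSITIVITY)

print(f"Experts alone: {found_by_humans} cases caught")
print(f"Model screening everything: {found_by_ai} cases caught")
```

The point of the sketch is that the model wins not by being more accurate (it is less accurate), but by being applied to every case rather than a small sample: lower fidelity, vastly higher velocity.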
How AI can create Value
Let me share three simple examples.
1. Retinal eye scans: These scans can indicate serious health conditions like diabetes and cancer. However, there are not enough human experts to analyse the high volume of scans; only a small fraction get reviewed. If we use an AI algorithm to assess all those scans, even a minimally effective one could identify the small percentage that needs further examination. This could create immense value simply by increasing the speed of analysis.
2. Weather forecasting: At the Alan Turing Institute, we have partnered with the Met Office to enhance weather predictions. Weather forecasting is complex; small changes in initial conditions can create vastly different outcomes. Traditional models, based on fluid dynamics, struggle with local accuracy. However, by using physics-constrained machine learning, we can gain better predictive capability, even in areas lacking data. In tests against existing supercomputers, we found that our models could match their accuracy while being a million times cheaper and faster. This means that, for the first time, advanced forecasting powered by AI is within reach of far more people, which is a game-changer.
3. Analysing fossils: My day job involves overseeing the Natural History Museum, and one intriguing question we often encounter is: how do we date a dinosaur fossil? This question is vital for palaeontologists and anyone interested in deep time, yet it is surprisingly complex. Fossils are essentially just different types of stone, and they all look quite similar, which makes dating them over a span of 200 million years challenging. However, we can date fossils by collecting a small sample from the surrounding substrate, be it chalk or sandstone. This sample often contains microfossils, such as pollen and plankton, which serve as reliable indicators of time because they evolve and change over known periods. With careful analysis, we can achieve good dating accuracy. Doing this effectively requires a trained postdoc, whom we will call Tom. He needs to make around 2,000 observations, meaning he would spend about ten days looking through a microscope for hours at a time. Unsurprisingly, it is a monotonous job! To tackle this inefficiency, we decided to leverage AI by pairing Tom with one of our machine learning experts. They developed a straightforward model that analyses images of the samples. In just four to five weeks, they created a model with 98.5% accuracy that processes data 30,000 times faster than Tom can. We are now preparing to offer this as a commercial service at a competitive price.

These examples highlight how AI’s true value lies in its ability to speed up decision-making and improve granularity, transforming not only individual organisations but also broader societal functions.
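Taking the quoted figures at face value, the throughput gain in the fossil example is easy to make concrete. A quick back-of-envelope calculation, assuming an eight-hour working day (my assumption; the talk gives only the ten-day and 30,000× figures):

```python
# Figures from the talk: ~2,000 observations over ~10 days at the microscope.
TOM_OBSERVATIONS = 2_000
TOM_DAYS = 10
HOURS_PER_DAY = 8          # assumption: an eight-hour working day
SPEEDUP = 30_000           # the model's quoted advantage over Tom

seconds_per_obs_human = TOM_DAYS * HOURS_PER_DAY * 3_600 / TOM_OBSERVATIONS
seconds_per_obs_model = seconds_per_obs_human / SPEEDUP

print(f"Tom:   {seconds_per_obs_human:.0f} s per observation")
print(f"Model: {seconds_per_obs_model * 1_000:.1f} ms per observation")
```

On these assumptions Tom spends about 144 seconds per observation, while the model takes roughly 5 milliseconds: the ten-day task collapses to well under a minute.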
AI threats
On a different note, I want to address a more pressing issue: the notion that AI poses an existential threat. I believe this idea is overstated and diverts our attention from more immediate societal concerns. For example, many organisations are now using machines to make decisions. Today, machines handle billions of tasks, such as credit checks, inventory management, and pricing. During my time at Amazon, we recognised the importance of understanding how to manage these machines effectively. While we have over a century of experience managing people, our understanding of machine management is still developing. Machines can fail due to poor data or outdated algorithms, leading to rapid, sometimes catastrophic outcomes. Unfortunately, many of the people who implement these systems lack the proper training to manage them effectively, which can be dangerous.
Additionally, we must consider who benefits from the value generated by AI systems. In the UK, we have some of the world's most valuable datasets, often provided for free to businesses. While this might seem advantageous, it poses the question of whether taxpayers should subsidise these resources, especially when much of the value created does not benefit the UK. Trust also remains a significant concern in AI. We need to think critically about where and how to involve humans in decision-making processes. Even if machines perform better in certain areas, human oversight is sometimes necessary.
Lastly, I want to highlight a concern that keeps me up at night: the accessibility of advanced technology to malicious actors. During my time managing Amazon’s operations in China, I saw how organised crime can exploit these capabilities. This risk is often overshadowed by concerns about state actors, but it is crucial to recognise it as we advance machine learning technologies.

In conclusion, regulating AI and related technologies is essential, but it presents considerable challenges. It requires a thoughtful approach that balances genuine societal concerns with the need to foster innovation. The region or country that successfully navigates this balance will likely attract significant investment and growth. It is therefore vital that we address these legitimate issues while maximising the opportunities for innovation.