There have been several developments at both international and UK level exploring how best to govern and regulate AI, which is developing rapidly, with exciting new opportunities but also potential threats emerging. In September 2024, the United Nations High-Level Advisory Body on AI published its final report, Governing AI for Humanity. This notes the urgent need for global governance, and the current inequity of representation in such governance. It makes several recommendations, including policy dialogue, capacity development, a global AI data framework and a global fund for AI. Delivering any of these recommendations requires global co-operation. In the UK, the government published its AI Opportunities Action Plan on 13 January 2025.

On Wednesday 29 January 2025, we hosted an evening discussion at The Royal Society to explore what needs to happen at a global level, the UK's approach domestically and internationally, and how we can maximise the benefits while minimising the risks. Our panel of expert speakers included Dr Douglas Gurr, Director of the Natural History Museum and Chair of The Alan Turing Institute; Professor Dame Wendy Hall DBE FRS FREng, Regius Professor of Computer Science at the University of Southampton and Member of the UN High-Level Advisory Body on AI; and Adrian Joseph OBE, Board Member and AI Advisor (Direct Line Group, National Lottery, GOSH and NatWest) and former Chief Data and AI Officer at BT Group. Feryal Clark MP, Parliamentary Under-Secretary of State for AI and Digital Government, joined the panel for the discussion period.
Following the presentations, the panel discussed a wide range of issues in response to questions from the in-person and online audience. Some of the key points raised are summarised below.
Under the previous government (in 2023), there was a 'lively' debate about the key issues for AI, and the decision was taken that the number one issue was safety. More recently, the NHS has been in the spotlight. There is a surprising number of things you can do if you stay ahead of what is happening internationally. One audience member asked what we can expect to see happening next.
The minister said that the Government believes safety and opportunity are not at odds; they are two sides of the same coin. You cannot make use of AI opportunities unless safety is baked in from the beginning. The AI playbook sets out the steps every department should go through when using AI in public services, and research is also under way into the societal harms associated with AI. She said that the work we do with our academic sector will be key to maintaining a good understanding of upcoming threats and safety issues.
Another panellist commented that setting up the AI Safety Institute was a good initiative, but that it was too narrowly focused on the existential threat from foundation models. It needs to broaden out and become part of the debate on issues such as responsibility frameworks; she said that the UK could drive this debate. Any new regulations need to draw on the advice of scientists who can see what is coming down the pipeline.
In 2023, the UK published guidelines for the safe and ethical use of AI, which apply to every regulatory body. There is a lot there to build frameworks out from.
With regard to helping people manage their data and privacy, the panel advocated the concept of 'data trusts' (managed by third parties who negotiate with companies on an individual's behalf). This could be done with healthcare data, and in particular NHS data. However, it cannot be achieved without first establishing trust in how data is used and stewarded.
Looking at the Chinese model of regulation of AI and the internet is interesting and useful. The concept of the 'four internets' is also worth exploring: parallel versions of the internet exist alongside the standard US-type model we see regularly in this country, and these are worth examining with regard to how AI will operate in the future.
We should be mindful of how we regulate and constrain AI in the UK so as not to drive business elsewhere, and more thoughtful about the wider set of skills needed for the development and training of AI. We should also look at things holistically, particularly in terms of safety and open-source models: these can speed up development but are also very 'open' to bad actors, so their regulation needs careful thought.
One panellist stated that one of the biggest existential risks for the UK is not adopting technology early enough, with more money and opportunities going to big tech companies outside the country. There is also an undercurrent of feeling in the UK that we spend a lot of time using AI developed overseas and that more should be developed here; however, investment is low. One factor is that venture capitalists (VCs) in the UK do not seem to understand science properly, and the Government has started a programme to help VCs gain a better understanding of STEM. There is also a view that the US company Palantir winning the contract for the NHS data framework was a missed opportunity for UK businesses.