Governing AI for humanity

There have been several developments, at both the international and UK level, exploring how best to govern and regulate AI, a technology that is developing rapidly, bringing exciting new opportunities but also potential threats. In September 2024, the United Nations High Level Advisory Body on AI published its final report, Governing AI for Humanity. It notes the urgent need for global governance, and the current inequity of representation in such governance, and makes several recommendations, including policy dialogue, capacity development, a global AI data framework and a global fund for AI. Delivering any of these recommendations requires global co-operation. In the UK, the government published its AI Opportunities Action Plan on 13 January 2025. On Wednesday 29 January 2025, we hosted an evening discussion at The Royal Society to explore what needs to happen at a global level, the UK's approach domestically and internationally, and how we can maximise the benefits while minimising the risks. Our panel of expert speakers included Dr Douglas Gurr, Director of the Natural History Museum and Chair of The Alan Turing Institute; Professor Dame Wendy Hall DBE FRS FREng, Regius Professor of Computer Science at the University of Southampton and Member of the UN High Level Advisory Body on AI; and Adrian Joseph OBE, Board Member and AI Advisor (Direct Line Group, National Lottery, GOSH and NatWest) and former Chief Data and AI Officer at BT Group; with Feryal Clark MP, Parliamentary Under-Secretary of State for AI and Digital Government, joining the panel for the discussion period.

DOI: https://www.doi.org/10.53289/PHXL1099

AI - how we got here

Volume 24, Issue 1 - September 2025

Dame Wendy Hall, DBE FRS FREng

Dame Wendy Hall, DBE FRS FREng is Regius Professor of Computer Science, Associate Vice President (International Engagement) and Director of the Web Science Institute at the University of Southampton. She became a Dame Commander of the Order of the British Empire in the 2009 UK New Year's Honours list and is a Fellow of the Royal Society, the Royal Academy of Engineering and the ACM. Dame Wendy was co-Chair of the UK government's AI Review, which was published in October 2017, and a member of the AI Council. She is currently the co-Chair of the ACM Publications Board and Editor-in-Chief of Royal Society Open Science. She is an advisor to the UK government and to many other governments and companies around the world, and in 2023 was appointed to the United Nations High Level Advisory Body on AI. Her latest book, Four Internets, co-written with Kieron O'Hara, was published by OUP in 2021.

  • What we are witnessing now is not something entirely new, but rather a significant evolution in how AI is developing
  • It is important to note that China has long been an active player in this domain, not just trying to catch up, and has enacted several notable laws
  • I believe generative AI does not present an immediate existential threat, although future advancements may require ongoing scrutiny
  • My primary concern now lies not just in regulating AI, but in the broader implications for the Internet itself

I do not have time to go through the full history of AI, but I will say that while many like to believe AI's history began with Alan Turing's work in the UK in the 1950s, AI has gone through multiple reincarnations between then and where we are today. What we are witnessing now has not just emerged out of the blue; it is a profound tipping point in the evolution of AI and in how it is perceived by society at large. In longer discussions I touch on the AI winters - periods of stagnation when AI failed to deliver on its promised impact - but currently I feel we are in an AI blazing summer, and none of us knows how it is going to turn out.

National Strategies

I first got involved in the sector when Jerome Pesenti and I were asked by Theresa May and her government to undertake the UK review of AI in 2017. This was all about economic growth and job creation - words we hear a lot today. I then worked with Greg Clark, then Secretary of State for BEIS, as our review was incorporated into the government's Industrial Strategy. The result was a billion-pound investment in AI by the government, which included the establishment of the Office for AI and the AI Council, and led to the development of the UK's National Strategy for AI in 2021. Investment in AI then continued through a series of successive Conservative governments, including when Rishi Sunak was Chancellor. He authorised a considerable amount of funding for AI, and we were really on the front foot internationally.

Other countries began to follow our direction of travel, and at the same time the regulation of AI started to appear in various national AI strategies. 2021 was when the EU started laying the groundwork for its AI Act, building on GDPR. China was, and has always been, in this game. It is important to note that it is not playing catch-up: it has passed a lot of interesting laws to regulate AI, albeit with a very different way of dealing with content.

Back in the UK, the Office for AI was overseeing the adoption of the national AI strategy and implementing the recommendations we made in the 2017 review. This was a pivotal moment, as the UK was one of the first nations to adopt a national AI strategy. The EU, meanwhile, was putting the finishing touches to its AI Act and trying to persuade the US to adopt it. That discussion is now history, as the playing field shifted quite dramatically. In November 2022, Sam Altman, Chief Executive of OpenAI, very cleverly created a user-friendly interface to the company's large language model (LLM), GPT, to create ChatGPT. Now anybody could interact with AI. All of a sudden, over Christmas 2022, everybody from government ministers to the media and the general public was playing with ChatGPT without really understanding what they were doing, but it felt like talking to something intelligent, because of the easy natural-language interface and the answers in prose.

In March 2023, the UK published its well-intentioned pro-innovation AI regulation white paper, but it gained little attention. Perhaps it was not the right time to produce something like this amid the ongoing debate around ChatGPT. We then swiftly moved into an era in which everyone was talking about the risks of generative AI becoming an existential threat to humanity. Scientifically, generative AI as it stands is never going to be an existential threat in the sense of going rogue. Future technology could be, as it evolves, so I applaud the work of the UK AI Safety Institute. However, I want them to look more broadly, rather than narrowly at the US generative AI models; I hope they are also looking at Chinese models.

Judgment Day

The existential-threat meme became very dominant in 2023. It was over-hyped by the technology companies and picked up by the media in a way that was very scary for people. Geoffrey Hinton, the Nobel Prize-winning computer scientist, said words to the effect of "I'm leaving Google because it is all too dangerous" - this from the man who invented it all. I really felt that this was not the right thing to say, but I think what he meant was: "I want to be free to say what I want, and I do not want to be constrained by being employed by one of the big tech companies." He has tempered his remarks since, and been a bit wiser about things. However, it was a statement the media picked up on, and it contributed to dangerous rhetoric.

In October 2023, the UN High Level Advisory Body on AI was set up. I was privileged to be a member. We had less than a year to produce a report on how the world should set up some form of global governance of AI. In November 2023, the UK hosted the first AI Safety Summit at Bletchley Park, in association with then President Joe Biden; it included significant discussions with prominent tech leaders and companies heavily involved in the development of AI, and with countries including China. Around the same time, China was actively promoting its AI framework through the Belt and Road Initiative, showcasing its growing influence in technology and AI development worldwide. The shift in global power dynamics is considerable: China is providing funding and resources to assist other countries in their AI endeavours, outpacing Western efforts in some areas.

The UK, the US and the EU

Just before the UK AI Safety Summit, President Joe Biden announced his executive order calling for self-regulation by the big tech companies. While the two countries appeared to work collaboratively, underlying disagreements about AI governance strategy persisted, and the tensions became clearer.

The UN HLAB on AI report, Governing AI for Humanity, that I was a part of was released in September 2024 and was largely accepted by the UN General Assembly that month. If implemented, it will lead to the formation of a global scientific panel aimed at establishing unified standards and policies for AI governance, similar to historical nuclear treaties. The report also proposed the creation of a global AI capacity development network and an accompanying fund to support AI initiatives in developing regions, often referred to as the "Global South". Meanwhile, the EU announced funding for an AI research initiative akin to CERN, with the goal of fostering significant innovation in European AI capabilities.

As 2025 began, the new UK government released its AI Opportunities Action Plan, highlighting key initiatives such as creating AI growth zones and an AI Energy Council to address the energy demands of AI technologies. Recently, however, political shifts have complicated the landscape, with concern over the potential revocation of Biden's executive order by President Trump, and substantial investments by tech giants in AI development in the US with who knows what regard for the safety of the technology. Looking ahead, as we prepare for the next summit in France, my primary concern lies not just in regulating AI, but in the broader implications for the Internet itself. If we do not approach these challenges responsibly, we risk turning the Internet into a dysfunctional space.

The future of the Internet and AI can be either a catastrophe or a significant advancement for us - it is crucial that we engage thoughtfully in this discourse. Let us hope we can steer it in a positive direction.

Footnote: Neither the US nor the UK signed the agreement that emerged from the Paris AI Summit in February 2025. We were told that the UK did not sign because the agreement said nothing about safety and security. The next AI summit will be held in February 2026 in New Delhi, India.