DOI: https://www.doi.org/10.53289/AVPW6690
Stephen Almond is Executive Director for Regulatory Risk at the Information Commissioner's Office (ICO). He leads the ICO’s teams charged with engineering information rights into the fabric of new ideas, technologies and business models as part of a dynamic digital economy, including through the Digital Regulation Cooperation Forum. Prior to joining the ICO, Stephen led a World Economic Forum initiative to promote the adoption of a more agile, innovation-enabling approach to regulation with governments and tech firms worldwide.
Summary:
Can Artificial Intelligence (AI) be regulated? As the regulator, of course I would say yes. But the motto of the Royal Society, where we are speaking tonight, is apparently Nullius in Verba, or ‘don’t take anybody’s word for it’.
So, I won’t ask you simply to take my word for it. Instead, I thought the best way of illustrating why I believe AI can be regulated is to talk through what it’s like to regulate AI right now.
At the ICO (the Information Commissioner’s Office), we sit at the heart of AI regulation. AI is built on data, much of it personal data, and so (despite all of the media hype over the last year) the questions of how to regulate AI are not new to us. We have a fair bit of experience in regulating it and in getting things right.
Parallels between data protection and AI regulation
Data protection law is principles-based: it sets out a range of things that we should be thinking about when we process personal data. These are the same principles that you’ll see in the government’s white paper on how AI should be regulated.
You’ll hear the government talk about questions of fairness and bias, safety and security, accountability and redress, and transparency and explainability. These are all core features of how data protection law already governs AI.
I’m not here to try to persuade you that data protection law is the answer to how to regulate AI; it is very much just one part of this particular puzzle. We are seeing AI used everywhere from entertainment to financial services to medicine. As a general-purpose technology applied in many different contexts, it needs to be brought into conformity with our expectations for those activities, particularly the ones currently carried out by humans. Like all complex problems, the answer to the question of how to regulate AI is not simple, but I do believe it is close to what the government has already set out. It needs to rely on a framework of existing domain-specific regulators, with a common set of approaches across those regulators. It needs to ensure that we as regulators are joined up, that we are not imposing conflicting requirements, and that there are no major gaps.
A year in the life of a regulator
So, what is it like to be a regulator in the age of AI?
Just over a year ago, I was sitting with my horizon-scanning teams as they put out their annual report on the biggest technology trends for data protection in 2023. They said, ‘we think that we should put forward generative AI’, and I said, ‘no, the issues there aren’t new’.
This might seem absurd after the year we’ve had, but I (mostly!) stand by my position that, just as AI is not new, the problems and the challenges of how to govern generative AI are largely not new either.
In many respects, for us as a data protection regulator, generative AI is simply another form of AI for us to respond to with our existing principles-based toolkit. We already have very comprehensive guidance on how organisations developing or deploying AI should be building in those core principles I mentioned earlier.
As winter turned to spring, we were quick to set out key guidance to the market on the sorts of things that we thought people should be taking note of, including our top tips for developers and deployers of generative AI.
In the summer, we followed this up with the launch of our Innovation Advice service, enabling organisations to get fast, frank feedback on their novel ideas. As you can imagine, the first questions were all about generative AI.
At the same time as supporting innovators before they brought new ideas to market, we had to issue warnings to firms that were not taking their existing regulatory responsibilities seriously.
We advised that we would be knocking on doors, particularly those of the organisations developing the most powerful models at the very top of the food chain, and looking at their data protection impact assessments.
Come autumn, we announced that we had issued Snap, Inc. with a preliminary enforcement notice in respect of the ‘My AI’ chatbot rolled out in Snapchat. We had concerns that the privacy risks surrounding this service, which was being used by children, had not been adequately identified and mitigated; we are now receiving representations from Snap, Inc. before a final decision is made.
The future of ICO AI regulation
Over the course of the last year, we have learned more about the unanswered questions that remain around data protection law and generative AI, such as the circumstances in which web scraping may be lawful. We’re determined to provide clarity to organisations, and at the start of 2024 we commenced a consultation series to address these questions.
In tandem, we're going to continue to join up with others. Building on our consultation series, we're working with the Competition and Markets Authority to prepare a joint statement on how we are going to regulate foundation models together.
Through the Digital Regulation Cooperation Forum, we are developing a joint offering with other digital regulators to support innovators looking to bring new ideas to market that straddle our regulatory remits. Our new DRCF AI and Digital Hub will provide rapid response support to innovators who have questions around how the law applies.
We are going to continue to respond at pace to developments in the market. Just like at the start of last year, when my teams were saying to me, ‘we really need to lean into this generative AI trend’, my teams are now saying to me, ‘we really need to lean into personalised large language models’.
It is in this way that we will successfully regulate AI: by being responsive to the pace of technology, by setting principles rather than detailed rules, by engaging with the market to provide regulatory certainty, by taking action where we find serious non-compliance, and by doing so in concert with our fellow AI regulators.