The surprising role of voluntary standards in AI regulation

  • 23 February 2024
  • Business, Technology
  • Dr Matilda Rhode

The upcoming Foundation for Science and Technology event asks “Can AI be regulated and if so, how?” Regulation is one of the key tools that governments can use to compel private sector behaviour, and AI has rapidly become a focal topic when it comes to the impact of private sector technologies on society.

AI technology applications are wide-ranging, from self-driving cars to social media feed curation, to say nothing of the potential applications not yet on the market. On top of that, “AI” covers a number of different technologies and may even refer to a wider system of which AI is just one component. To handle the complexity of mapping ethical principles to technological implementation, several regulatory initiatives are pointing to standards to light the way.

Standards are voluntary codes of best practice, usually written by experts. Relying on standards development bodies taps into an existing ecosystem that convenes those who understand the challenges involved in building and deploying these technologies. Public AI debates have highlighted the need to consider wider public concerns in the development of AI regulation – this is already baked into some standards development processes. For example, the UK national standards body, BSI, has a Consumer and Public Interest Network sitting on its standards development committees, and wider stakeholder participation is a continuing goal.

Global roles of standards in AI regulation

The recent update to the UK government’s pro-innovation approach to AI regulation reaffirmed five principles for regulators to follow. The accompanying implementation guidance for regulators points to 17 international standards, as well as government guidelines, to help realise these principles.

The EU AI Act also uses standards to provide the implementation details of its key principles and requirements: conformity with the standards will demonstrate conformity with the AI Act. The standards supporting the EU AI Act have not yet been formally linked to the legislation, but work is ongoing.

Brazil has produced several pieces of AI legislation and points to international standards in Bill 21 of 2020 to map principles to technical measures.

There are additional international regulatory initiatives that do not explicitly mention international standards. Regulation is mandatory; standards are voluntary – so, where standards are not cited by regulation, why follow them?

Crossing jurisdictions 

AI and digital technologies are built on global supply chains and deployed internationally too. Regulation is bounded by jurisdiction, but international standards offer a commonly agreed way to operate across administrations.

An AI model may be built in a research lab in China using US hardware and international data sources labelled by globally distributed, multilingual annotators. Imagine you use a free large language model from Mexico: your query may be processed in a data centre in Arizona and then automatically shared with research teams in France (depending on your operating agreement), all in a matter of seconds.

International standards have the advantage of being developed through consensus-driven processes by national and then international committees. International standards bodies include the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC) and the Institute of Electrical and Electronics Engineers (IEEE), as well as European standards bodies such as CEN/CENELEC and ETSI.

Looking ahead

The AI standards corpus is now large and growing, covering topics including bias, robustness, data quality, testing, ethics, governance and procurement requirements. BSI partners with the National Physical Laboratory and the Alan Turing Institute to deliver the AI Standards Hub, a central resource for exploring the evolving AI standards landscape and related content.

As key standards are published, the next steps become clear for management and developers alike. Skills and education around AI and its risks are critical to ensuring that we shape the nature of the technologies we deploy. Developers need safety tools that are as easy to deploy as the technologies themselves, and relief from market pressures to release products as soon as they are working. Shared resources and communities are springing up to support these activities, and we hope to see continued investment in AI safety as well as AI innovation.