AI Strategy

DOI: https://www.doi.org/10.53289/MAIC8300

Creating the best framework for AI in the UK

Tim Clement-Jones

Lord Clement-Jones CBE was made a life peer in 1998. He is Liberal Democrat House of Lords spokesperson for Digital and a former Chair of the House of Lords Select Committee on AI. He is Co-Chair of the All Party Parliamentary Group on AI, a founding member of the OECD Parliamentary Group on AI and a consultant to the Council of Europe’s Ad-hoc Committee on AI. He is also Chair of Council of Queen Mary University of London, a consultant to global law firm DLA Piper and President of Ambitious about Autism.

Summary

  • AI is becoming embedded in everything we do
  • We should be clear about the purpose and implications of new technologies
  • There is general acceptance of the need for a risk-based, ethics-driven regulatory framework
  • The Humanities will be as important as STEM in the development of AI
  • Every child leaving school should have an understanding of the basics of AI.

A little over five years ago, the House of Lords AI Select Committee began its first inquiry. The resulting report was titled AI in the UK: ready, willing and able? At about the same time, the independent review Growing the Artificial Intelligence Industry in the UK set a baseline from which to work.

There will always be something of a debate about the definition of artificial intelligence. It is clear, though, that the availability of quality data is at the heart of AI applications. In the overall AI policy ecosystem, some of the institutions were newly established by Government, some of them recommended by the Hall review: the Centre for Data Ethics and Innovation, the AI Council and the Office for AI. Standards development has been led by the Alan Turing Institute, the Open Data Institute, the Ada Lovelace Institute, the British Standards Institution and the Oxford Internet Institute, to name just a few.

Regulators include the Information Commissioner’s Office, Ofcom, the Financial Conduct Authority and the Competition & Markets Authority, which have come together under the new Digital Regulation Cooperation Forum to pool expertise. The Court of Appeal has also been grappling with issues relating to IP created by AI. Regulation is not necessarily the enemy of innovation: it can in fact be a stimulus, and it is the key to gaining and retaining public trust around AI, so that we can realise the benefits and minimise the risks. Algorithms, though, have got a bad name over the past few years.

I believe that AI will lead to greater productivity and more efficient use of resources generally. However, technology is not neutral. We should be clear about the purpose and implications of new technology when we adopt it. Inevitably, there are major societal questions about the potential benefits of new technologies. Will AI better connect and empower our citizens and improve working life?

In the UK, there is general recognition of the need for an ethics-based regulatory framework: this is what the forthcoming AI governance White Paper is expected to contain. The National AI Strategy also highlights the importance of public trust and the need for trustworthy AI.

We should be clear about the purpose and implications of new technology when we adopt it. Will AI better connect and empower our citizens?

The legal situation

The Government has produced a set of transparency standards for AI in the public sector (and, notably, GCHQ has produced a set of AI ethics principles for its operations). On the other hand, it has also been consulting on major changes to the GDPR post-Brexit, in particular a proposal to get rid of Article 22, the so-called ‘right to explanation’ where there is automated decision-making (if anything, we need to extend this to decisions where a human is already involved). There are no proposals to clarify data protection for behavioural or so-called inferred data, which are the bedrock of current social media business models and will be even more important in what has been described as the metaverse. There is also a suggestion that firms may no longer be required to have a Data Protection Officer or to undertake data protection impact assessments.

We have, in fact, no settled regulation or legal framework for intrusive AI technologies such as live facial recognition. This technology continues to be deployed by the police, despite the best efforts of a number of campaigning organisations, and even of successive biometrics and surveillance camera commissioners who have argued for a full legal framework. Nor are there robust compliance or redress mechanisms for ensuring ethical, transparent, automated decision-making in our public sector.

It is not yet even clear whether the Government is still wedded to sectoral (rather than horizontal) regulation. Yet the case for a risk-based form of horizontal regulation, one which puts into practice common ethical values such as the OECD principles, is now irrefutable.

There has been a great deal of work internationally by the Council of Europe, the OECD, UNESCO, the Global Partnership on AI and, especially, the EU. The UK therefore needs a considerable degree of convergence between ourselves, the EU and members of the Council of Europe, for the benefit of our developers and cross-border businesses, to allow them to trade freely. Above all, this means agreeing on common standards for risk and impact assessments, alongside tools for audit and continuous monitoring of higher-risk applications. In that way it may be possible to draw the USA into the fold as well. That is not to mention the whole defence and lethal autonomous systems space: we still await the promised Defence AI Strategy.

We have no settled regulation, or legal framework, for intrusive AI technologies such as live facial recognition.

AI skills

AI is becoming embedded in everything we do. A huge amount is happening to support the development of specialist AI skills, and the Treasury is providing financial backing. But as the roadmap produced by the AI Council itself points out, the Government needs to take further steps to ensure that the UK’s general digital skills and digital literacy are brought up to speed.

I do not believe that the adoption of AI will necessarily make huge numbers of people redundant. But as the pandemic recedes, the nature of work will change and there will be a need for different jobs and skills, complemented by the new opportunities AI creates, so the Government and industry must ensure that training and retraining opportunities take account of this. The Lords AI Select Committee also shared the AI Council roadmap’s priority of diversity and inclusion in the AI workforce, and wanted to see much more progress on this.

We need, however, to ensure that people have the opportunity to retrain so that they can adapt to the labour market as it evolves under AI. The Skills and Post-16 Education Bill, with its introduction of a lifelong loan entitlement, is welcome, but it is not ambitious enough.

A recent estimate suggests that, within 20 years, 90% of UK jobs will require digital skills. That is not just about STEM skills such as maths and coding: social and creative skills, as well as critical thinking, will be needed. The humanities will be as important as the sciences, and the top skills currently being sought by tech companies, as Kingston University’s Future Skills league table has shown, include many creative skills: problem solving, communication, critical thinking and so on. Careers advice and adult education likewise need a total rethink.

We need to learn how to live and work alongside AI. The AI Council roadmap recommends an online academy for understanding AI. Every child leaving school should have a basic understanding of how AI works. Finally, given the disruption in the job market, we need to modernise employment rights to make them fit for the age of the AI-driven gig economy, in particular by establishing a new ‘dependent contractor’ employment status, sitting between employment and self-employment.