Five years ago, on 9 March 2016, the world watched as Lee Sedol, an 18-time world champion of the board game Go, took on AlphaGo, a computer programme developed by DeepMind. The contest was broadcast live and 60 million viewers watched it in China alone. Powered by artificial intelligence (AI) technologies, AlphaGo went on to win all but one of the five games that it played against Lee Sedol. It was a resounding victory of an AI system over a human brain, made even more spectacular by the fact that it happened so soon: many AI experts believed that computers were years away from beating top-performing humans.
News of Lee Sedol’s loss to AlphaGo dominated the front pages at the time. Since then, hundreds of thousands of articles have been written about AI outperforming humans in tasks ranging from medical diagnosis to combat simulations. These news stories have created enormous hype around AI. No other technology captures the public’s imagination as much as artificial intelligence.
The hype surrounding AI is built on a narrative of humans vs machines -- a competition where the future of humanity is thought to be at stake. It is a captivating story, but we believe that the hype is misplaced. Humanity stands to gain so much if, instead of seeing the advancement of AI as a race against the human brain, we reframe it as a collaboration. The most valuable question that we can ask of AI technologies is how human and artificial intelligence can complement each other in pursuit of the public good.
To answer that question, we must move away from the flashy headlines about game-playing computers, dancing robots, and driverless cars. Instead, we must return to a less glamorous, but more accurate portrayal of AI as a field of research anchored in statistics and capable of generating data-driven insights. And if the pursuit of the public good is our ultimate aim, the priority must be to identify how AI can help the public sector -- the very group of organisations tasked and entrusted with looking after the public good.
AI’s ability to distil insights from vast quantities of data can usher in a new era of policy-making. At The Alan Turing Institute, we identified five ways in which AI could complement human intelligence and, by doing so, revolutionise decision-making processes in the public sector:
Simulation and evaluation. Policy-making is a complex process, characterised by high degrees of interdependency and uncertainty. During the Covid-19 crisis, for example, we saw firsthand how policies aimed at tackling health outcomes had a knock-on effect on virtually every other policy area -- from education and law enforcement to the economy and the environment. The human brain is poorly equipped to identify and analyse these interdependencies, especially in situations where uncertainty is rife. But this is precisely where AI can help. Modelling methodologies like agent computing can capture the complexities of our interdependent world. Academic research is moving towards a day when we can build digital replicas of our economies and societies: a virtual lab where policy-makers can simulate and evaluate the effects of proposed policy measures. Coupled with statistical techniques to quantify uncertainty, these modelling efforts have huge potential to augment human intelligence. They endow human decision-makers with the ability to fine-tune policies and reduce harmful effects before policy measures are implemented in the real world.
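To make the idea of a "virtual lab" concrete, here is a toy sketch of agent computing (illustrative only, not one of the Institute's models): a few hundred simulated agents pass an infection between random contacts, and a single policy lever -- the contact rate -- is varied to compare outcomes before anything is tried in the real world. All parameters and function names are invented for illustration.

```python
import random

def run_epidemic(contact_rate, n_agents=500, n_steps=60, seed=42):
    """Toy agent-based epidemic model: each step, every infected agent
    meets `contact_rate` random others and transmits with a fixed
    probability. Returns the total number of agents ever infected."""
    rng = random.Random(seed)               # seeded for reproducibility
    infected = set(rng.sample(range(n_agents), 5))   # initial cases
    recovered = set()
    p_transmit, p_recover = 0.1, 0.2
    for _ in range(n_steps):
        new_cases = set()
        for _agent in infected:
            for _ in range(contact_rate):
                other = rng.randrange(n_agents)
                if other not in infected and other not in recovered:
                    if rng.random() < p_transmit:
                        new_cases.add(other)
        newly_recovered = {a for a in infected if rng.random() < p_recover}
        infected = (infected | new_cases) - newly_recovered
        recovered |= newly_recovered
    return len(recovered | infected)

# Simulate two policy scenarios in the "virtual lab".
baseline = run_epidemic(contact_rate=8)    # no restrictions
distanced = run_epidemic(contact_rate=2)   # contact-reduction policy
```

Real policy models are vastly richer, but the workflow is the same: run the replica under alternative policy settings, quantify the uncertainty, and compare outcomes before committing to a measure.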
Measurement and detection. Policy-makers have a tradition of relying on official statistics to measure and detect real-world phenomena. Data sets released by national statistics offices take months and sometimes even years to compile and verify. Today’s world, however, is fast-moving and policy-making processes must adapt to keep up. Part of that adaptation process is learning to use the quintillions of bytes of data that we generate each day. This is where AI can help. Techniques such as machine learning can accomplish what our human brains struggle to do: sift through massive quantities of data to measure and detect phenomena as they happen. When AI and humans join forces on measurement and detection tasks, a world of opportunity opens up. This is already evident in areas such as online harms, where the performance of models that detect hate speech online improves significantly when human annotators work alongside state-of-the-art AI models.
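One way to picture this division of labour is a triage loop (a toy sketch with invented function names and thresholds, not a production moderation system): the model handles clear-cut cases at scale, while uncertain cases are routed to human annotators, whose judgements can then feed back into training.

```python
def model_score(text, flagged_terms):
    """Toy detection model: fraction of words on a watch list.
    Real systems would use a trained language model instead."""
    words = text.lower().split()
    return sum(w in flagged_terms for w in words) / max(len(words), 1)

def triage(posts, flagged_terms, low=0.2, high=0.6):
    """Split posts into auto-removed, auto-allowed, and human-review
    buckets based on the model's score."""
    auto_remove, auto_allow, needs_human = [], [], []
    for post in posts:
        score = model_score(post, flagged_terms)
        if score >= high:
            auto_remove.append(post)
        elif score <= low:
            auto_allow.append(post)
        else:
            needs_human.append(post)   # uncertain: route to annotators
    return auto_remove, auto_allow, needs_human
```

The thresholds set the trade-off: widening the human-review band costs annotator time but catches more of the cases the model gets wrong.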
Prediction and forecasting. Prediction and forecasting are cornerstones of policy-making processes. When the government invests in a new school, hospital, or power plant, for example, predictions about future demand and supply play a key role. AI can make such predictions and forecasts more accurate and reliable. Furthermore, AI can generate individual-level predictions. It can, for example, help regulators predict which restaurant is likely to fail a future food safety inspection, or which water pipe is likely to develop a crack. If humans and AI work alongside each other, policy-makers will have better insight into what the future holds and will be able to anticipate where problems will arise.
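The restaurant example can be sketched with a minimal logistic regression written from scratch -- the features (past violations, years since last inspection) and the data are invented for illustration, and a real regulator would use far richer records:

```python
import math

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Minimal logistic regression fitted by stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1 / (1 + math.exp(-z))        # predicted failure risk
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict_risk(w, b, x):
    """Probability that an establishment fails its next inspection."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

# Hypothetical features: [past violations, years since last inspection]
X = [[0, 1], [1, 1], [4, 3], [5, 2], [0, 2], [3, 3]]
y = [0, 0, 1, 1, 0, 1]   # 1 = failed the next inspection
w, b = train_logistic(X, y)
```

Ranking establishments by `predict_risk` lets inspectors prioritise visits where failure is most likely, rather than spreading a fixed inspection budget uniformly.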
Personalisation. Private companies are using AI to personalise products and services. Platforms like Netflix, for example, use machine learning algorithms to personalise the movie recommendations that each user sees. The public sector could benefit enormously from employing these technologies. Healthcare could improve substantially through the development of AI technologies that personalise treatment plans. Likewise, social care could receive a much-needed boost from machine learning approaches that personalise the support packages that citizens receive. A collaboration between humans and AI in the area of personalisation could usher in a new era of public service provision, where resources are allocated in a fair and transparent way and government support is tailored to people's individual needs and situations.
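At its simplest, personalising support packages is a matching problem. The toy sketch below (the package catalogue, field names, and scoring rule are all invented) ranks packages by how well they cover a citizen's assessed needs -- the same content-based idea that underpins recommendation systems, with a human caseworker making the final call:

```python
def recommend(needs, packages):
    """Rank support packages by how well they match a citizen's assessed
    needs (a toy content-based matching scheme)."""
    def match(pkg):
        return len(needs & pkg["covers"]) / len(pkg["covers"])
    return sorted(packages, key=match, reverse=True)

# Hypothetical catalogue of support packages and one citizen's needs.
packages = [
    {"name": "housing_support", "covers": {"housing"}},
    {"name": "carer_package", "covers": {"respite", "home_visits"}},
    {"name": "combined_plan", "covers": {"housing", "home_visits"}},
]
ranked = recommend({"respite", "home_visits"}, packages)
```

Because the scoring rule is explicit, a caseworker (or an auditor) can see exactly why a package was suggested -- a transparency property that matters far more in public services than on a streaming platform.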
Ethics and governance. The applications of AI described above can give rise to serious ethical issues. The use of machine learning in criminal justice or in child protection, for example, has been rightly criticised for ethical violations ranging from privacy invasion to unlawful discrimination. These challenges, however, need not discourage us from using AI. Instead, they must motivate us to do better. Ethics has to be an integral part of the science behind AI, especially when designing, developing, and deploying AI systems for the public sector. If ethics becomes part of the science, the collaboration between AI and humans could help us tackle some of our societies’ toughest challenges. Bias and discrimination, for example, have plagued our societies since the dawn of time. Humans make biased decisions. AI systems can learn these biases from us, but if we make ethics part of the science, instead of replicating our biases, AI can help us uncover and address them. Reducing human biases in decision-making processes could be AI’s most important contribution to the public good -- if we, the humans involved, make that a priority.
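Uncovering bias can itself be made quantitative. One common audit statistic -- sketched below with invented data -- is the demographic parity gap: the difference in positive-decision rates between groups. Auditing human or algorithmic decisions this way is one concrete sense in which AI science can surface biases rather than silently replicate them.

```python
def demographic_parity_gap(decisions, groups):
    """Difference in positive-decision rates across groups -- a simple
    fairness audit statistic; a gap near zero suggests similar rates."""
    by_group = {}
    for decision, group in zip(decisions, groups):
        by_group.setdefault(group, []).append(decision)
    rates = {g: sum(d) / len(d) for g, d in by_group.items()}
    return max(rates.values()) - min(rates.values())

# Invented audit data: 1 = approved, grouped by a protected attribute.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(decisions, groups)
```

A single statistic never settles a fairness question -- different fairness criteria can conflict -- but making the measurement explicit turns an invisible bias into something decision-makers can examine and contest.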
We can dream of a day when the excitement around AI centres on its ability to make millions of people’s lives better. A day when the front pages are dominated by stories of humans and AI working together to improve the way we govern the world. At The Alan Turing Institute, we are striving to make this dream a reality. Three years ago, we set up a public policy research programme, which works alongside the government to improve policy-making with AI. We wrote the UK government’s official guidance on the ethical use of AI technologies for the public sector and we recently launched a large programme of work on shocks and resilience. Our aim is to build models that help policy-makers develop a rigorous understanding of societal responses to shocks and a clear strategy for how to engender policy resilience. It is a step towards changing the narrative around AI. Through our work, we hope to help shift this narrative from humans racing against AI to humans collaborating with AI in pursuit of the public good.
Cosmina Dorobantu is the Deputy Director of The Alan Turing Institute’s public policy research programme. Helen Margetts is Professor of Society and the Internet at the Oxford Internet Institute, University of Oxford, and the Director of the Turing's public policy programme. The programme is home to 70+ researchers in data science and artificial intelligence and has helped more than 80 public sector organisations take advantage of the latest generation of data-intensive technologies. Interested readers can contact the programme at email@example.com.