Viewpoint

Statistics are important tools for understanding a wide range of vital elements of our society. But they are not fixed forever: they adapt to changing societal requirements.

DOI: https://www.doi.org/10.53289/FHFT8990

Keeping statistics meaningful and useful

Mike Keoghan

Mike Keoghan is Deputy National Statistician and the Director General of the Economic, Social and Environmental Group at the Office for National Statistics (ONS). Before this, Mike was the Chief Economic Adviser and Director of Analysis at the Department for Business, Energy and Industrial Strategy (BEIS). Between 2020 and 2021 Mike was also Acting Director General for Business Sectors at BEIS. During this time, he was responsible for leading the UK Government’s business policy, including the business readiness programme before the UK’s exit from the EU.

One of the great quotes attributed to Keynes is: “When the information changes, I alter my conclusions. What do you do, sir?” That pure insight into the nature of uncertainty is one reason why, in addition to his many other roles, Keynes would have made a first-rate National Statistician.

When it comes to official statistics, revision is part of the business. Normally those revisions attract little public interest. Our website is updated, and we state where and when figures have been revised. Academics download the new series and analysts rerun their models. All perfectly normal, routine behaviour in the life of a user of official statistics anywhere in the world.

That is because we all understand the uncertainty inherent in producing official statistics. There are always balances to be struck. Some estimates can be produced quickly with high levels of confidence: others will be more uncertain. Our job is to be transparent about that, and to continue to review and improve. As decisions need to be made by policymakers, businesses and households, we can all recognise the value of making those decisions with the best information available at that time.

There are periods when this becomes more difficult. A good current example is economic growth. Historically, when the UK economy grew by 0.3-0.5% every quarter, a 0.1 percentage point revision did not change our situational understanding. Recently, however, the economy has sometimes been broadly flat, so a small revision can dramatically change the picture: the economy may be reported as pitching from recession to recovery and back again, yet with no real change to underlying economic performance.
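
To make that concrete, here is a minimal sketch with made-up figures (not real ONS estimates) showing why the same 0.1 percentage point revision matters far more when growth is near zero:

```python
def narrative(quarterly_growth_pct: float) -> str:
    """Crude headline label for a quarterly GDP growth estimate."""
    return "contraction" if quarterly_growth_pct < 0 else "growth"

# Strong-growth era: a first estimate of 0.4% revised down to 0.3%
# leaves the story unchanged.
print(narrative(0.4), "->", narrative(0.3))    # growth -> growth

# Broadly flat economy: the same 0.1pp revision, from 0.0% to -0.1%,
# flips the headline from growth to contraction.
print(narrative(0.0), "->", narrative(-0.1))   # growth -> contraction
```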

The risk-averse approach would be to wait until the uncertainty resolved itself, holding off for three months until the next quarterly GDP figure. However, the UK is one of only two countries in the world that produce monthly GDP estimates. Given we have that capability, we think it is right to share that information with policymakers and the public, even if that sometimes places us in an uncomfortable position as new data comes in.

This provides an example of ‘normal’ statistical uncertainty, albeit amplified by unusual economic times. However, there is also the challenge of dealing with uncertainty when changing the underlying statistics themselves. We have recently experienced this with our improvements to the Business Enterprise Research and Development (BERD) statistics.

It is worth recapping what the revisions did: they moved R&D spending by businesses in 2020 from £26.9 billion to £44.0 billion, a 64% increase. Josh Martin of the Bank of England calculates that this moved R&D as a percentage of GDP from 1.7% to 2.4%, which elevated the UK from being one of the international R&D laggards to a comfortable mid-table position. Not only does that bring the UK into line with the Government’s stated ambition for R&D spend, but it also exonerates one of the prime culprits for the UK’s productivity puzzle and changes a significant aspect of recent UK economic history.
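
As a quick check of the quoted change, using only the figures above:

```python
# Figures from the text: 2020 business R&D spending, before and after revision.
old_estimate, new_estimate = 26.9, 44.0              # £ billion
increase_pct = (new_estimate - old_estimate) / old_estimate * 100
print(f"{increase_pct:.0f}% increase")               # -> 64% increase
```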

This has generated a lot of interest. It is worth reflecting on why the ONS made that change because it illustrates two aspects – validity and methodological change – of dealing with uncertainty in producing economic statistics.

At the beginning of 2022, Sir Ian Diamond, the National Statistician, asked the Office to look again at how we were measuring R&D. What we discovered was that the BERD survey, while methodologically sound and robust in terms of processing, had simply failed to keep pace with underlying change in the economy. The survey was conceived in the late 1980s, in a world of industrial labs where R&D was undertaken by the likes of ICI or GEC. We were, as Professor Richard Jones put it, missing out on the ‘dark matter of [smaller firms] doing R&D’.

Validity

That is the first challenge: the issue of validity. The problem ONS discovered with R&D was that our survey was no longer valid; it was no longer representative of its underlying population, and so it needed to change.

Ensuring the validity of our statistics is something that ONS has been methodically pursuing since Sir Charlie Bean’s review in 2016. Across a whole suite of measures, some of our statistics had become unmoored from the underlying economic base. There has been a huge amount of pathbreaking work since Bean to improve our statistics: we have dramatically improved our measures of the digital economy and implemented double deflation in the National Accounts, thereby giving better insights into which sectors were driving our growth. We have put in place a new regime for trade statistics following our exit from the European Union, which allows us to capture micro-level trade that was previously obscured, and we have just published fine-grained measures of local Gross Value Added, enabling a much better understanding of how growth plays out spatially. By collaborating with the Economic Statistics Centre of Excellence (ESCoE), which was incubated at NIESR and now sits within King’s College London, we have been able to make a step change in our measurement of the economy.
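
To illustrate what double deflation means in practice, here is a minimal sketch with invented figures (not ONS data): output and intermediate inputs are deflated by their own price indices, rather than deflating nominal value added by a single index.

```python
# Invented figures for one industry: current prices (£bn) and price indices
# relative to the base year. Input prices have risen faster than output prices.
output_nominal, inputs_nominal = 150.0, 60.0
output_deflator, inputs_deflator = 1.05, 1.20

# Single deflation: one price index applied to nominal GVA.
gva_single = (output_nominal - inputs_nominal) / output_deflator

# Double deflation: output and intermediate inputs deflated separately.
gva_double = output_nominal / output_deflator - inputs_nominal / inputs_deflator

print(f"single: £{gva_single:.1f}bn, double: £{gva_double:.1f}bn")
# -> single: £85.7bn, double: £92.9bn. When input and output prices diverge,
#    the two approaches tell quite different stories about real activity.
```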

Methods

The second challenge that the R&D reforms illustrate is the improvement in statistical methods themselves. New methods and new data sources mean that we can secure better estimates of the underlying economic phenomena. In the case of R&D, it was the availability of HMRC’s tax credit data that highlighted the widening discrepancy between measures. Although ONS did not use that data in producing the new estimates, the availability of the rich HMRC microdata did reinforce where the first stage of improvements should be focussed: namely, the measurement of research and development conducted by small- and medium-sized businesses.

Methodological improvement has not been confined to R&D. Over the next two years ONS will inject retail scanner data into the headline inflation measures. As well as the technological change of moving from thousands to billions of price points, the work has required world-leading methodological developments. Similarly, in leaving the EU, ONS has had to move from a Europe-wide survey-based method for calculating trade to a new administrative-based system. Looking ahead, we will be overhauling the methods and collection of the Labour Force Survey (LFS), which is the Government’s single biggest survey after the Census.
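
For a flavour of the index-number machinery that scanner data makes possible, here is a sketch of a bilateral Törnqvist price index, a standard building block of the multilateral methods typically used with scanner data. This is an illustration only, not ONS’s production methodology:

```python
import math

def tornqvist(p0, p1, q0, q1):
    """Bilateral Törnqvist price index between periods 0 and 1 for matched
    products: log price changes weighted by average expenditure shares."""
    e0 = [p * q for p, q in zip(p0, q0)]      # period-0 expenditures
    e1 = [p * q for p, q in zip(p1, q1)]      # period-1 expenditures
    s0 = [e / sum(e0) for e in e0]            # expenditure shares
    s1 = [e / sum(e1) for e in e1]
    log_index = sum(0.5 * (a + b) * math.log(later / earlier)
                    for a, b, earlier, later in zip(s0, s1, p0, p1))
    return math.exp(log_index)

# Toy basket: one price rises 10%, another falls 5%, quantities shift.
index = tornqvist(p0=[1.00, 2.00], p1=[1.10, 1.90],
                  q0=[100, 50], q1=[90, 60])
print(f"{index:.4f}")   # ~1.0196, i.e. prices up roughly 2%
```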

All this change and innovation is exciting for ONS, but where does that leave users? The reality is that more valid statistics, better methods and richer sources of data will probably change our published estimates: after all, that is the point. That is what we saw with R&D: we now have our best-ever estimate, but it has changed our understanding.

That, though, leaves policymakers and other statistical users in a difficult position. We know how to handle the ‘normal’ uncertainty of statistical revisions: the ONS provides very clear quality information, including confidence intervals for surveys. Experienced users also know the cadence of review and revision. However, what happens (as we saw with R&D) when improvements leave the resulting revisions well outside what users would regard as ‘normal’? How can we support users of trade, GDP, prices, labour force, crime, productivity and other datasets as our investments begin to come on stream?
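
For the ‘normal’ case, that quality information lets a user put a range around an estimate themselves. A minimal sketch, with hypothetical figures and assuming the usual normal approximation (not a specific ONS output):

```python
def ci_95(estimate: float, standard_error: float) -> tuple[float, float]:
    """Approximate 95% confidence interval for a survey estimate,
    assuming the sampling distribution is roughly normal."""
    half_width = 1.96 * standard_error
    return estimate - half_width, estimate + half_width

# Hypothetical survey estimate: a rate of 4.2% with a 0.2pp standard error.
low, high = ci_95(4.2, 0.2)
print(f"4.2% (95% CI: {low:.1f}% to {high:.1f}%)")   # -> 3.8% to 4.6%
```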

Scientific method

Scientific method provides part of the answer here. We have to be open and transparent, and to submit ourselves to peer review, so that we can bring users along with us but also so that they can contribute to our improvements. ESCoE is part of this, along with our collaborations with the Turing Institute, the Bank of England, Southampton University and our ONS Fellows, among others. In the R&D case, the ONS produced a series of articles and bulletins, met with stakeholders and users, presented at conferences and submitted the work to peer review.

However, our experience of R&D also tells us that robust application of scientific method is necessary but not sufficient for handling the uncertainty involved in renewing our statistics. If we are really to help users navigate the uncertainty and build trust, we need a better way of communicating the size and implications of our changes. We will work even more closely with our users to understand how we can provide clearer guidance about forthcoming changes. By doing this, we will enable policymakers and analysts to understand the implications of change, manage the downsides of uncertainty and further enhance the reputation of UK official statistics.