The UK government must focus on the relationships between different digital technologies when assessing their potential risks, which would help avoid the “siloisation” of fundamentally interconnected problems, expert witnesses have told a House of Lords committee.
Giving evidence to the Risk Assessment and Risk Planning Committee – which gathered virtually on 13 January to hear about the most significant technological risks facing society today – a number of witnesses told the Lords that the government’s current risk assessment framework failed to account for the interplay between different technologies, and therefore for the full scope of the risks they represented.
Financial Times innovation editor John Thornhill, for example, said that emerging digital technologies, from drones and facial recognition to quantum computing and cryptography, “can combine in ways that we cannot foresee or anticipate”, making it difficult to prepare for their potentially negative consequences.
However, Stephen Cave, executive director of the Leverhulme Centre for the Future of Intelligence, said the government currently has a tendency to separate potential risks into “specific, discrete areas”, as it has done with cyber attacks and disinformation in its most recent National Risk Register.
The register is a public version of the government’s annual national risk assessment process, which aims to identify and compare all threats of national significance that could affect the UK within a five-year horizon. Led by the civil contingencies secretariat of the Cabinet Office, the process takes a multi-agency approach to ranking risks based on the likelihood and impacts of the “reasonable worst-case scenario”.
Cave told the committee: “I think the approach of highlighting a few specific uses of digital technology that are relatively distinct underestimates the very many ways in which these technologies will impact across the board, and the ways in which they relate.” He added that his main worry was the “siloisation of technological risks”.
“AI [artificial intelligence], for example, is not one single distinct risk, it’s a very general technology, more akin to steam or electricity,” said Cave. “If we imagine 200 years ago a committee like this considering the risks of the steam engine, if it focused specifically on the technology itself, then policy might be about preventing engines from exploding or people getting their hands caught in the gears. But it would miss the industrialisation of warfare, rapid urbanisation, and the creation of a new working class.”
Simon Beard, an academic programme manager and senior research associate at the Centre for the Study of Existential Risk, said the UK government’s approach to risk was dominated by a “security mindset that only considers risks in the immediate future” from behind closed doors.
“One of the really important things about the National Security Risk Assessment [NSRA] is it gives a lot of attention to attacks, a reasonable amount of attention to accidents, and very little attention to systemic risks,” said Beard, adding that the NSRA itself is then kept “classified” because of this “security mindset”.
“That means it’s not open to the same amount of scrutiny, or a peer review, that certainly I would expect my risk assessment work to receive,” he said. “I think the UK could benefit a lot from opening this up and having a less security-orientated mindset for risk.”
Beard said the government could begin to address this scrutiny problem by being much more transparent about the risk assessment process, and by including a much wider range of voices in it.
“My number one recommendation for making sure that it’s done right is make sure that it’s done in a way that’s transparent, make sure that people can see what that process was, how it was applied, and how that influenced the decisions that were taken, so that actually if there were people who really should’ve been in that room but weren’t included, they can recognise that and say so and be included next time,” he said.
“That’s the strength of this process – it’s the diversity of perception that you’re able to access when you apply this stuff well, it’s not about expertise…or personal brilliance.”
Talking about horizon scanning, Thornhill said that although it may be easy to make predictions about the potential risks and effects of using individual technologies, as soon as forecasters try to assess them jointly, their predictions or calculations about the impacts are “scrambled”.
“We need to be finding people who can think laterally about the connections that are really quite obvious, but that people who are at the forefront of this technology are not necessarily thinking about,” he said.
To address the related issue of secrecy, which inherently obscures the inter-related nature of technological risks, Thornhill suggested that the UK should, like Sweden, conduct “national resilience exercises”, in which parliament, national agencies, local government bodies, central banks and private companies work together to model what would happen in “extreme circumstances”, such as a mass disinformation campaign.
“I would go a bit beyond what they do and not just look at it from the top down – from what the agencies and the companies and banks and so on are looking at – but also try to look at it from the bottom up and engage ordinary citizens in different fields to see how they would cope in some of these emergencies,” he said.
“I think it would be more interesting in a way to understand people in Mumsnet or the farming community, or teachers and healthcare workers about how they would respond in these circumstances. So fusing in national resilience exercises at a strategic level, the very practical impacts they could have on day-to-day life.”