At several points in human history, there have been great transitions that accelerated our progress and shaped everything that would follow.
Ten thousand years ago, the first Agricultural Revolution allowed us to establish cities and civilisation. Four hundred years ago, the Scientific Revolution gave us a reliable method for understanding the laws of nature. Two hundred years ago, the Industrial Revolution launched us on a new trajectory of rapid economic growth.
But there has recently been another transition more important than any that has come before. With the detonation of the first atomic bomb in 1945, a new age of humanity began. Our rising power finally reached the point where we could destroy ourselves – the first point at which the risks to humanity from within exceeded the risks from the natural world.
These extreme risks – high-impact threats with global reach – define our time. They range from global tragedies such as Covid-19 to existential risks that could lead to human extinction. By our estimates – weighing the probabilities of events ranging from asteroid impact to nuclear war – the likelihood of the world experiencing an existential catastrophe over the next 100 years is one in six. Russian roulette.
This is clearly unsustainable: we cannot survive many centuries operating at a level of extreme risk like this. And, as technology accelerates, there is strong reason to believe the risks will only continue to grow – unless we make serious efforts to increase our resilience to these threats.
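To see why, a rough back-of-the-envelope calculation helps (our illustration only, assuming for simplicity that the one-in-six risk per century stays constant and that each century is independent):

$$P(\text{survive } n \text{ centuries}) = \left(\frac{5}{6}\right)^{n}, \qquad \left(\frac{5}{6}\right)^{10} \approx 0.16$$

On those assumptions, the chance of surviving another ten centuries is itself only about one in six, and over longer horizons it falls towards zero – and that is before accounting for risks that grow alongside technology.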
We do not know which extreme risk event will come next. It might be another pandemic. Or it might be something completely different, such as a threat from emerging technology. AI progress, for instance, is rapid, unprecedented, and transformative. But we do not even need to look at what AI might do in the future to see the extreme risks it poses. Even with the AI we already have available today, we could face an extreme risk event, either through accident or misuse.
To do justice to the seriousness of these threats, our political leaders need to give them more attention than they have so far. They must go beyond merely ensuring we are better prepared for the next naturally occurring pandemic such as Covid-19, and transform the way we manage extreme risks across the board.
There are promising signs that the UK Government is starting to appreciate the scale and complexity of this challenge. In the coming months it is set to produce an AI strategy, a review of its biosecurity strategy and a National Resilience Strategy. As we noted in Future Proof, a report we wrote earlier this year, it is vital that extreme risks are at the heart of all these efforts.
Three broad areas require maximum attention. First, we must transform our biological security. The UK remains vulnerable to a wide range of accidental and deliberate biological threats, which risk even worse consequences than Covid-19. These risks will only grow in line with the rapid developments being made in synthetic biology and biotechnology, which offer harrowing prospects of misuse.
A single body in the UK – ideally a new National Centre for Biological Security – should be charged with ensuring preparedness for the full range of biological threats the country faces. Its focus should be on preventing and countering the threat of biological weapons, which should be treated as a national security threat on a par with nuclear weapons. It should also develop effective new defences against biological threats (for instance, by pioneering new technologies such as clinical metagenomics), and build talent and collaboration across the UK biosecurity community.
Second, we need to boost our resilience to the threat posed by artificial intelligence. One way to do this is to create a “compute fund” providing free or subsidised computing resources to researchers working on issues such as AI safety, security and alignment. Access to large amounts of compute is critically important for this work: many recent machine learning breakthroughs relied on compute budgets that academia and civil society simply cannot afford. Because only a handful of well-resourced labs can pay for such experiments, research has been skewed towards short-term aims rather than socially beneficial applications. This cannot be allowed to continue.
Third, we need to ensure that our national security community gives the threat posed by emerging technology the degree of attention it deserves. Appropriate steps must be taken, for instance, to ensure that AI systems are not incorporated into NC3 (nuclear command, control and communications). As evidenced by the sobering history of nuclear near-misses, introducing AI systems into NC3 increases the risk of accidental launch. Surely no benefits can be worth this additional risk.
Cyber operations that target the NC3 of Nuclear Non-Proliferation Treaty signatories also need to be avoided. As holder of the G7 presidency this year, the UK has an excellent opportunity to advocate for this policy norm internationally by establishing a multilateral agreement to this effect.
We truly are living in a remarkable time in human history. Policymakers need to rise to the moment before it passes. Just as Covid-19 triggers an immune response in each individual, protecting them from reinfection, so the pandemic has triggered a social immune response across the UK – a public will to prevent the next extreme risk. But like the individual immune response, this social immune response will fade over time. Before it does, we need to seize this opportunity to put in place lasting protections to safeguard the country – and the wider world – from extreme risks.
Toby Ord is a senior research fellow in philosophy at Oxford University, and author of The Precipice: Existential Risk and the Future of Humanity
Angus Mercer is the executive director of the Centre for Long-Term Resilience, and a research affiliate at Cambridge University’s Centre for the Study of Existential Risk
This article was originally published by WIRED UK