Near-Existential Risk

Mar 28, 2021 16:17 · 579 words · 3 minute read Longtermism

Longtermism is a moral theory that says that most of the value we (humans) ought to care about lies in the future, and so we should look hard for opportunities to influence the future in positive ways. Longtermism arises as a natural consequence of three ideas:

  1. Utilitarianism: We should improve the lives/subjective wellbeing of as many people as possible, by as much as possible, evaluated by summing improvements over lives.
  2. Future lives are just as valuable as present lives.
  3. The future is going to be very long, and contain many lives.

I want to pause for a moment on the third idea. How long is the future? Modern humans have been around for something like 300,000 years. The Earth will likely remain habitable for billions of years, meaning that even without leaving Earth the future of humanity could plausibly last thousands of times longer than all of human history up to this point. Add in that more people are alive today than at any point in history, and that the number of humans seems likely to remain in the billions for the foreseeable future, and you can see just how many person-lives that future might contain.
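To make the scale a little more concrete, here is a minimal back-of-envelope sketch. Every figure in it is a rough, illustrative assumption on my part rather than a careful estimate, but it shows where the "thousands of times longer" and the enormous count of potential person-lives come from.

```python
# A rough back-of-envelope sketch of the scale of the future.
# All figures are illustrative assumptions, not careful estimates.

human_history_years = 300_000          # rough age of modern humans
earth_habitable_years = 1_000_000_000  # order-of-magnitude remaining habitability
population = 8_000_000_000             # roughly today's population, held constant
lifespan_years = 80                    # assumed average lifespan

# How many times longer could the future be than all of human history so far?
future_to_past_ratio = earth_habitable_years / human_history_years

# Roughly how many person-lives could that future contain?
future_person_lives = earth_habitable_years * population / lifespan_years

print(f"Future / past ratio:    ~{future_to_past_ratio:,.0f}x")
print(f"Potential future lives: ~{future_person_lives:.1e}")
```

Under these made-up assumptions the future is a few thousand times longer than human history so far and contains on the order of 10^17 lives, which is the intuition the argument rests on.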

This makes a strong case for caring about existential risks, namely risks that could cause human extinction or otherwise permanently harm that long-term future. An example of the latter might be technology enabling an oppressive totalitarian regime to stay in power forever.

I worry though that longtermist efforts focus too much on existential risks at the expense of what I’ll call near-existential risks.

Let’s focus on the extinction side of existential risks. The corresponding near-existential risks are near-extinction risks. The longtermist case for worrying about them is that near-extinction events beget actual-extinction risks at a high enough rate that we should have a broad interest in averting not just the extinction of humanity but also the death of, say, 99% of humanity. This is not just because near-extinction and extinction are immediate tragedies of comparable magnitude, but also because in a world where 99% of people die, society is reduced to a very vulnerable state in which the rest of humanity could easily perish too, and along with it the whole of that long future.

This vulnerability arises because most of the modern technologies that let humanity handle crop failures and droughts and pandemics rely on a large industrial base and experienced workforce that would be hard to support with many fewer people. Even setting aside equilibrium considerations like “Could we support the factories needed to manufacture and distribute mRNA vaccines with 99% fewer people?” consider what happens when society suddenly loses all the knowledge and experience and relationships that make the world go. We could well struggle to reliably retain certain basic knowledge about manufacturing mechanical devices or metallurgy, let alone the detailed quantum device engineering that goes into making computer chips. This puts us in a much more vulnerable state, and makes it much more likely that an unlucky asteroid strike or pandemic or famine is actually an extinction event.

So what I’m getting at is that near-extinction risks inherit some of the moral weight of actual-extinction risks by virtue of making actual-extinction scenarios more likely.
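One way to see the force of this argument is a toy expected-value calculation. The probability and value figures below are placeholders I am inventing purely for illustration, not estimates the post endorses; the point is only that even a modest chance of cascading into full extinction lets the long-term term dominate.

```python
# A toy sketch of the "inherited moral weight" argument.
# The probabilities and values are made-up placeholders, for illustration only.

value_of_long_future = 1e17      # e.g. potential future lives (see earlier sketch)
immediate_deaths = 0.99 * 8e9    # immediate toll of a near-extinction event

# Assumed probability that a near-extinction event cascades into full extinction,
# because the surviving society is far more fragile.
p_extinction_given_near = 0.1

# The expected loss from a near-extinction event includes a share of the long future.
inherited_loss = p_extinction_given_near * value_of_long_future
expected_loss = immediate_deaths + inherited_loss

print(f"Immediate loss:      ~{immediate_deaths:.1e} lives")
print(f"Inherited long-term: ~{inherited_loss:.1e} lives")
print(f"Total expected loss: ~{expected_loss:.1e} lives")
```

With these placeholder numbers the inherited long-term component swamps the (already enormous) immediate toll, which is exactly what it means for near-extinction risks to inherit moral weight from extinction risks.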

I think similar logic applies to non-extinction existential risks. To pick one example, scenarios with long-lasting but not permanent oppression are also scenarios in which society is less able to handle other risks, which means they inherit some of the moral weight of actual existential risks.
