Existential risks are risks of processes or events which could permanently curtail the potential of humanity (Bostrom 2016). These might include natural risks such as those posed by asteroids or super-volcanoes as well as novel technological risks like mishaps resulting from synthetic biology or artificial intelligence. Experts remain uncertain about the absolute probability of existential risks.
Some existential risks would cause humanity to become extinct if they came about—these are called “extinction risks”. Other risks might stop humanity from flourishing in other ways—a brutal and unending totalitarian regime might permanently damage what is valuable about humanity without causing actual extinction. Alternatively, a smaller scale disaster could undermine industrial civilization, leaving humanity stuck at a significantly smaller population and unable ever to recover technologically.
Philosophers have argued that existential risks are especially important because the long-run future of humanity matters a great deal (Bostrom 2013). Many hold that there is no intrinsic moral difference between the importance of a life today and one in a hundred years, and there may be vastly more people in the future than there are now. Proponents argue, therefore, that it is overwhelmingly important to preserve humanity's potential, even if the risks to it are small.
One objection to this argument is that people have a special responsibility to other people currently alive that they do not have to people who have not yet been born (Roberts 2015). Another objection is that, although existential risks would in principle be important to manage, they are currently so unlikely and poorly understood that reducing them is less cost-effective than work on other promising areas.
Bostrom, Nick. 2013. Existential risk prevention as global priority. _Global Policy_ 4(1): 15-31.
An academic paper making the case for existential risk work.

Bostrom, Nick. 2016. The existential risk FAQ.
This FAQ introduces readers to existential risk.

Centre for Effective Altruism. Cause profile: Long-Run Future.

Karnofsky, Holden. 2013. The moral value of the far future.
An essay about the value of the far future.

Matheny, Jason. 2007. Reducing the risk of human extinction. _Risk Analysis_ 27(5): 1335-1344.
A paper exploring the cost-effectiveness of extinction risk reduction.

Roberts, M. A. 2015. The non-identity problem. _The Stanford Encyclopedia of Philosophy_, edited by Edward Zalta.