On January 25, 2018, the Doomsday Clock struck two minutes to midnight. Lawrence Krauss and Robert Rosner, members of the Bulletin of the Atomic Scientists’ Science and Security Board (which maintains the clock), remarked that “the world is … as threatening as it has been since World War II.” This kind of rhetoric is quite prevalent in our media, and the clock serves as a symbol of it. Yet the clock itself is nothing more than a propaganda device. It does not provide a scientific measure of human extinction risk—the “two minutes to midnight” time merely signifies the risk is greater than it was in previous years. And even this assertion is suspect. For example, the big hand was seven minutes from twelve during the Cuban missile crisis.
This broad trend of fetishizing the end of humanity is usually premised on misguided theories and sometimes influenced by ulterior political motives. Most of the commonly cited methods of omnicide (human extinction as a result of human action) are not likely to occur at all.
For instance, overpopulation is often prophesied. Some of these theories draw on the mysterious collapse of the Rapa Nui civilization on isolated Easter Island. According to some theorists, the population grew too large, its limited natural resources were overexploited, and a devastating famine and civil war followed. However, there is substantial evidence that the civilization was not destroyed by war or overpopulation. Instead, experts attribute its collapse to a combination of severe variations in environmental conditions and diseases introduced by the arrival of Europeans.
The idea that societies have a natural tendency to grow too large goes back to the nineteenth-century scholar Thomas Malthus, who argued that famine may be a necessary check on population growth because resources increase more slowly than population. These ideas were used to justify British inaction during the Great Irish Famine. Yet Malthus’s thesis underestimates the human capacity to innovate and extend resources. For example, crop yields have exploded over the last seventy years thanks to developments in science and technology. Such developments have not halted: crops can now be genetically engineered to withstand adverse conditions. They can be grown in vertical formations and without soil or natural light. They can be monitored incessantly by sensors, real-time analytics and drones, which improve yields by telling farmers where to administer water, fertilizer and pesticide, and when to harvest.
Furthermore, population growth does not continue to mushroom until a state of overpopulation is reached, as Malthus supposed. Instead, as societies grow richer they tend to have fewer children. This is reflected in the fact that the world’s growth rate is slowing and its population is expected to peak at 9.22 billion in 2075. Even if these projections prove inaccurate and fertility rates swell unexpectedly, it is unlikely such a development would threaten the very existence of the human race.
Another common theory of extinction is that climate change may trigger a runaway greenhouse effect, whereby rising temperatures cause more water to evaporate, and this water vapor, itself a greenhouse gas, warms the atmosphere further. The cycle would culminate in the boiling away of the oceans and the extinction of life on Earth. However, almost all lines of evidence suggest that this is implausible, even in principle. One reason is that water absorbed by the atmosphere tends to form clouds, and clouds cool the atmosphere by reflecting more sunlight away from Earth than they re-radiate back down.
The idea that a nuclear winter would result from a nuclear war has also been criticized. The basic theory is that the detonation of many nuclear bombs would trigger widespread firestorms, injecting a significant amount of soot into the atmosphere. This soot would block sunlight, cooling the Earth; crops would then fail and famine would become unavoidable. But even the physicist who coined the phrase nuclear winter subsequently discounted the idea. Moreover, the popularisers of the thesis, the TTAPS group, have been widely criticized for their assumptions and for the simplicity of their model. Some of this skepticism was vindicated in 1991, when smoke plumes from the Kuwait oil fires failed to reach the higher levels of the atmosphere where they could have blocked substantial sunlight, contrary to the model’s predictions. In fact, many scientists believe the hypothesis was propelled by political motivations. This was corroborated by the leading British scientific journal Nature, which in 1986 noted the political erosion of scientific objectivity: “Nowhere is this more evident than in the recent literature on ‘Nuclear Winter,’ research which has become notorious for its lack of scientific integrity.”
If the new age of physics cannot threaten humanity, perhaps the new age of biology can. After all, recent advances in gene editing have made it conceivable to create a pathogen capable of exterminating the entire human race. However, psychologist Steven Pinker casts doubt on this notion in his book Enlightenment Now. Pinker’s argument is threefold. First, it is difficult to re-engineer a complex evolved trait in a pathogen by simply inserting a gene or two, since the effects of any gene are intertwined with those of the rest of the organism’s genome. Second, there is a trade-off between virulence and contagion: germs that kill their hosts quickly struggle to spread widely, yet to precipitate a pandemic a pathogen must do both. Third, advances in biotechnology also help the good guys (and there are many more of them) develop vaccines.
A more widespread fear is that of superintelligent machines. Nick Bostrom provides a thorough discussion of this in his book Superintelligence. The risk, as physicist Stephen Hawking once observed, “isn’t malice but competence.” It is that, given a goal, an artificially intelligent machine (AI) may carry out its instructions with unintended consequences. The common example is that the goal make paperclips may result in the transmutation of our bodies, along with the rest of the Universe, into paperclips. Even the goal make only 1000 paperclips might cause the AI to convert our bodies and the planet into devices that count the paperclips in multifarious ways. This is because the AI could be a sensible Bayesian agent that always assigns a non-zero probability to having failed the goal, owing to uncertain perceptual evidence or false memories, for instance; the only way to reduce that probability is to gather more evidence. What’s more, if the AI truly is superintelligent, we may be powerless to stop it.
While this may seem worrying, it is not, even assuming such an incredible machine could be built and that its intelligence would improve uncontrollably quickly. There are ways to prevent such myopic behaviors. For one, tests could be conducted: computer game environments may provide safe spaces in which previews of the AI’s actions could be exhibited clearly. Additionally, intellectual work could be offloaded to the AI itself to reduce the chance of mistakes. Successful goals may be oriented around asking the AI to determine what we intend, with conditionals preventing any harm (such as don’t cause human extinction). Bostrom expounds variants of this thinking in his book.
Yet that is not the only type of robot that threatens the human race, according to doomsayers. Nanobots, machines the size of molecules, may be programmed to self-replicate for efficiency. To do this, the nanobots would likely require carbon, of which humans and other living things are a rich source. Thus, if an outbreak occurred, the biosphere might be dismantled; the planet would be denuded of its rich and varied vitality and blanketed in an all-encompassing grey goo, a homogeneous sea of insatiable nanobots.
Nevertheless, this scenario is closer to science fiction than science. Even Eric Drexler, the father of nanotechnology and populariser of the concept, admits as much: nanobot self-replication is unnecessary. Instead of building swarms of tiny, complex, free-floating robots to manufacture products, it would be more practical to use simple robot arms in larger factories. There are other flaws in the idea, too. For example, it is implausible that the nanobots would remain beyond control indefinitely, or that their energy supplies would never deplete.
Thus, extinction by anthropogenic means seems unlikely any time soon. Of course, there may be hidden risks, and there certainly needs to be more research into these topics. Shamefully, the number of published academic papers on existential risk is far smaller than the number published on dung beetles or Star Trek. However, the Doomsday Clock is not the way to correct this, since its fearmongering could be counterproductive: it could foster hopelessness and a fatalistic attitude, which reduce activism. According to historian Paul Boyer, it could also encourage arms races.
Worse, it could create a pervasive enmity towards technological development, threatening to choke the very engine that propels humanity forward, lifting people out of poverty and healing the sick. Even as technology ushers in new risks, it conquers old ones. We can now grow crops in spite of harsh weather conditions and invent vaccines that prevent pandemic disease outbreaks. Indeed, the solution to many of the problems that await us is not less technology, but more. Dangerous climate change can realistically be thwarted only through widespread nuclear power, electricity grids and powerful batteries. Existential risks in general can best be reduced by colonizing other planets, such as Mars. Both getting to and living in these places will necessitate new technologies, most of which are already under development.
There are many ways to improve the world in which we live—but frightening people isn’t one of them.