Whether by accident or otherwise, the possibility of catastrophe is, if anything, greater now than it was back then.

The Apollo 11 astronauts were quarantined after landing, but there was a gap when they were picked up at sea (Credit: Getty Images)

Admittedly, alien annihilation is not the biggest risk the world faces. Still, while there may be "planetary protection" policies and labs to guard against alien back-contamination, it's an open question how well these regulations and procedures will apply to private ventures that visit other planets and moons in the Solar System.
Broadcasting our presence into the galaxy adds to this threat: it risks a potentially disastrous encounter with aliens, especially if they are more technologically advanced than we are.
History suggests that bad things tend to happen to populations that encounter more technologically proficient cultures — look at the fate of indigenous people meeting European settlers.
More concerning is the threat of nuclear weapons. Setting the atmosphere ablaze may be impossible, but a nuclear winter akin to the climatic change that helped to kill off the dinosaurs is not. In WWII, atomic arsenals were not abundant or powerful enough to trigger this disaster, but now they are. Ord estimates that the risk of human extinction in the 20th Century was around one in a hundred, but he believes it's higher now. On top of the natural existential risks that were always there, the potential for a human-made demise has ramped up significantly over the past few decades, he argues.
As well as the nuclear threat, the prospect of misaligned artificial intelligence has emerged, carbon emissions have skyrocketed, and we can now meddle with the biology of viruses to make them far more deadly. We're also rendered more vulnerable by global connectivity, misinformation and political intransigence, as the Covid pandemic has shown.
Another way that existential risk researchers have characterised this burgeoning danger is by asking you to imagine picking balls out of a giant urn. Each ball represents a new technology, discovery, or invention.
The vast majority of them are white or grey. A white ball represents a good advance for humanity, like the discovery of soap. A grey ball represents a mixed blessing, like social media. Inside the urn, however, there are a handful of black balls. They are exceedingly rare, but pick one out, and you have destroyed humanity. This is called the "vulnerable world hypothesis", and it highlights the problem of preparing for very rare, very dangerous events in our future.
So far, we haven't picked out a black ball, but that's most likely because they are so uncommon; our hand has already brushed against one or two as we reached into the urn. There are many technologies or discoveries that could turn out to be black balls. Some we already know about but have so far avoided unleashing, such as nuclear weapons or bioengineered viruses. Others are known unknowns, such as machine learning or genomic technology. And others are unknown unknowns: we don't even know they are dangerous, because they haven't been conceived of yet.
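To see why rarity alone offers little comfort, here is a minimal sketch of the urn model in Python. The per-draw chance of a black ball and the number of draws are illustrative assumptions for the sake of the example, not figures from the hypothesis itself:

```python
import random

# Illustrative assumptions, not figures from the vulnerable world hypothesis:
P_BLACK = 0.001   # assumed chance that any one new technology is a "black ball"
N_DRAWS = 500     # assumed number of major technologies humanity draws over time
TRIALS = 10_000   # number of simulated histories

# Closed form: the probability of never drawing a black ball in N_DRAWS draws.
survival_exact = (1 - P_BLACK) ** N_DRAWS

# Monte Carlo check: simulate many histories and count the ones that survive.
survived = sum(
    all(random.random() >= P_BLACK for _ in range(N_DRAWS))
    for _ in range(TRIALS)
)
survival_simulated = survived / TRIALS

print(f"Exact survival probability:     {survival_exact:.3f}")  # about 0.61
print(f"Simulated survival probability: {survival_simulated:.3f}")
```

Even at odds of one in a thousand per draw, the chance of never pulling a black ball across 500 draws falls to roughly 60 percent: small per-draw risks compound, which is precisely the hypothesis's point.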
Why do we fail to treat these catastrophic risks with the gravity they deserve? Wiener has some suggestions. He describes the way that people misperceive extreme catastrophic risks as "tragedies of the uncommons". You have probably heard of the tragedy of the commons: it describes the way that self-interested individuals mismanage a communal resource.
Each person does what's best for themselves, but everybody ends up suffering. It underlies climate change, deforestation and overfishing.
The site of the Trinity test today, beneath an atmosphere that was fortunately not set alight (Credit: Getty Images)

A tragedy of the uncommons is different, explains Wiener. Rather than people mismanaging a shared resource, here people are misperceiving a rare catastrophic risk. Several psychological tendencies feed this misperception. The first is the "unavailability" of rare catastrophes.
Recent, salient events are easier to bring to mind than events that have never happened. The brain tends to construct the future with a collage of memories about the past. If a risk leads the news — terrorism, for instance — public concern grows, politicians act, tech gets invented, and so on. The special difficulty of foreseeing tragedies of the uncommons, however, is that it is impossible to learn from experience.
They never appear in headlines. But once they happen, that's it, game over.

In your previous book, Our Final Hour, you said we had a 50 percent chance of surviving the 21st century. How do you feel about our odds today?

Well, that was obviously a rough number, but I still believe that there could be serious setbacks to our civilization, and I feel more concerned now than I was then, because technology means that small groups or even individuals can, by error or by design, have a disruptive effect that cascades globally.
My concerns about this have only grown since I wrote Our Final Hour. In the short run, I worry about the disruptive effects of cyber attacks or some form of biological terror, like the intentional release of a deadly virus. These kinds of events can happen right now, and they can be carried out by small groups or even an individual. Disruptions of this kind will be a growing problem in our future, and they will lead to more tensions between privacy, security, and liberty.
And it will only become more acute as time goes on. I also worry that our societies are more brittle now and less tolerant of disruption. In the Middle Ages, for example, when the Black Death killed off half the populations of towns, the survivors sort of went on fatalistically. But if we had some sort of pandemic today, once it got beyond the capacity of hospitals to cope with all the cases, I think there would be catastrophic social disruption long before the number of cases reached 1 percent.
The panic, in other words, would spread instantly and be impossible to contain.

Do you think the pace of technological change is now too fast for society to keep up?
Is it too fast for society? Just look at the impact of social media on geopolitics right now. And the risks of artificial intelligence and biotechnology far exceed those of social media. But these things also have potentially huge benefits to society, if we can manage them responsibly.
The downsides are enormous, and the stakes keep getting higher. But these changes are coming, whether we want them to or not, so we have to try and maximize the benefits while at the same time minimizing the risks.
Do you think our greatest existential threat at this point is ourselves and not some external threat from the natural world?
I worry about human folly and human greed and human error. I worry much more about, say, a nuclear war than I do a natural disaster.

You talk a lot in the book about cooperation and the need for better decision-making.
As scientists, we must try to find solutions for these problems, but we also have to raise public consciousness and interest. I consider this my obligation as a scientist. The wider public has to be involved in that conversation, and scientists can help by educating them as much as possible.
In the book, I talk about the atomic scientists who developed nuclear weapons during WWII, many of whom became politically involved after the war to do what they could to control the powers they helped unleash.

That being said, Loeb ended his blog post on a depressing note.
In his opinion, humanity will wipe itself out long before the sun might. Loeb, who is the chair of Harvard University's astronomy department, wrote that humanity needs to "contemplate space travel out of the solar system". In order to do so, he added, we need to build "an artificial world" capable of bouncing between stars and their neighboring, potentially habitable planets.
This industrial spacecraft and human habitat would "represent a very major upgrade to the International Space Station (ISS)," he said. Once our means of traveling to other planets and moons in the universe is secured, humanity needs to focus on duplicating itself, and other existing species, before we all get annihilated. To him, that means making genetically identical copies of ourselves, plants, and animals, and spreading those copies to other stars.
Obviously, the astronomer pointed out, that future solution won't do much for preserving people alive on Earth today. But to Loeb, it's more important to ensure the longevity of our species as a whole than to protect "our own skin". All of his ideas aside, Loeb isn't that sure that humanity will be around to experience its demise at the hands of a brightening, expanding sun. That's because the dead silence we hear so far from the numerous habitable exoplanets we've discovered may indicate that advanced civilizations have much shorter lives than their host stars.
Loeb is confident that extra-terrestrial life exists, or existed, in the universe. He is in part famous for the idea that the first known interstellar object to pass through our solar system, a rock named 'Oumuamua, was an advanced alien spaceship scouting Earth and nearby planets for life. That hypothesis has since been dismissed by multiple astronomers.
In September 2019, scientists announced they'd detected water vapor on a potentially habitable planet for the first time. The planet, named K2-18b, is a super-Earth that orbits a star about 110 light-years away. K2-18b is the only known planet outside our solar system with water, an atmosphere, and a temperature range that could support liquid water on its surface, which makes it our best bet for finding alien life.
But, as Loeb mentioned in his blog post, so far researchers have yet to discover anyone else out there.