Saturday, December 13, 2025

AGI and the Hitherto Inexorable Impetus toward Human Extinction


The human species has created an ever-growing number of means of its own destruction. It creates means of destruction faster than it overcomes or mitigates them. The logic of this process is that we are rushing toward extinction and will almost inevitably get there.

  

How could this not be the case?

 

To avoid this, we would need the cumulative odds of human self-destruction to plateau, at some point in the future, somewhere well short of 100%.

 

To put it differently:

 

Take the difference d between a 100% chance of self-destruction and the cumulative odds of human self-destruction up to time t.

 

As time t grows, this difference d keeps shrinking.

 

To survive, we need d to stop shrinking toward zero. More precisely: we need its pace of shrinking to slow, and to keep slowing, at a fairly steep geometric rate, so that the cumulative destruction odds, as time t continues growing, converge on a figure well under 100%.
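To make the requirement concrete, here is a minimal sketch of the arithmetic, using a per-period hazard notation h_t that is introduced purely for illustration and is not part of the original argument:

```latex
% d(T): remaining chance of survival after T periods, where h_t is the
% chance of self-destruction in period t given survival up to that point.
\[
  d(T) \;=\; \prod_{t=1}^{T} \bigl(1 - h_t\bigr),
  \qquad
  \text{cumulative destruction odds } D(T) \;=\; 1 - d(T).
\]
% Assuming each h_t < 1, the product d(T) converges to a positive limit
% (so D stays well under 100%) exactly when the hazards are summable:
% sum_t h_t < infinity. A geometrically decaying hazard is one sufficient pattern:
\[
  h_t = h_1 r^{\,t-1},\;\; 0 < r < 1
  \;\Longrightarrow\;
  \sum_{t=1}^{\infty} h_t = \frac{h_1}{1 - r} < \infty,
  \qquad
  d(\infty) \;\ge\; 1 - \frac{h_1}{1 - r}.
\]
% Example: h_1 = 1% per period and r = 1/2 give total destruction odds of at most ~2%.
```

In other words, the per-period chances of self-destruction must fall off fast enough to have a small, finite sum; a hazard that merely declines, but too slowly, still drives the cumulative odds to 100%.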

 

 

Why we inexorably keep careening toward annihilation

 

Thus far, we have been going the opposite way from success and survival. The distance to 100% self-destruction not only continues shrinking, but shrinks at an ever-faster pace (i.e., by an ever-larger percentage of the remaining chances of survival at any time t).
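In the same illustrative notation as above, a hazard that holds steady or grows forces the remaining survival chance d toward zero:

```latex
% If the per-period hazard does not shrink -- say h_t >= h for some fixed h > 0 --
% then the remaining survival chance decays at least geometrically:
\[
  d(T) \;=\; \prod_{t=1}^{T} \bigl(1 - h_t\bigr) \;\le\; (1 - h)^{T} \;\longrightarrow\; 0,
  \qquad
  \text{so } D(T) \longrightarrow 100\%.
\]
% Worked example with a constant 1% per-period hazard:
%   d(100)  = 0.99^100  ~ 37%     remaining survival odds after 100 periods
%   d(1000) = 0.99^1000 ~ 0.004%  remaining survival odds after 1000 periods
% A growing hazard (shrinkage by an ever-larger percentage of d) only speeds this up.
```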

 

This happens because we keep inventing new and more powerful technologies. They inevitably include a complement of new and more complete ways of destroying humanity.

 

It is easier to destroy than to create or preserve, despite the advantage that the vast majority of humans want to preserve humanity, not destroy it. We create dangers fast. We come up with cures, always imperfect, always with some time delay after the risk has begun. Every cure we find for a destruction risk creates its own risks, often small ones at first but snowballing later.

 

People used to think nuclear weapons were the ultimate form of human self-annihilation. Now we know this is not true. We now have not only new biological means of destruction; we also have:

 

* gray goo: new self-replicating nanotechnologies that could potentially gobble up the world and destroy all life down to the molecular level.

If such a destructive innovation is possible, argued Bill Joy, the co-founder of Sun Microsystems, then over the sum of coming time it is inevitable. Despite many skillful counterarguments, he was basically right, unless we destroy our species some other way before we can get to destroying all life with gray goo.

And human self-destruction could be considered a benign option compared to some of the even vaster destructive potentials of contemporary scientific work:

 

* strange matter: it could convert all matter on Earth and beyond into itself, eradicating life along with ordinary matter.

 

* ‘vacuum decay’: a decay of the Higgs field from its current, apparently metastable state to a lower-energy, more stable one (its lowest-energy state is its most stable, and is termed the ‘true vacuum’). A bubble of the new vacuum would spread out at the speed of light from the initial point of decay, destroying the entire ‘local’ universe, meaning everything that is not expanding away from us faster than the speed of light.

Fortunately, it could be a few centuries before we’re technologically capable of inadvertently triggering a vacuum collapse. Maybe we’ll destroy ourselves through old-fashioned human war in the meantime and not take the universe with us.

 

* If we develop faster-than-light technologies, we might be able to destroy all of our universe, not just the vast local universe.

 

* If we’re in a multiverse and learn to communicate or move between universes, then we’ll almost inevitably figure out how to destroy the multiverse. Not just because some of us act maliciously, but mainly because we’re curious, we’re inventive, and we can’t resist the temptation to do immediate good for our fellows.

 

There was a courageous Argentinian scholar of international relations, Carlos Escudé, whose work was fundamental in persuading his country and Brazil to mutually abandon their nuclear weapons programs. The world owes him a debt of gratitude for this exercise in preserving humanity from its risk of self-destruction. He went on to review the scenarios listed above, the risks of even more far-reaching destruction through technological advance, and republished this list supportively, as a warning to humanity, on his online portal. We can no longer pay the debt to him in person; he was taken from us by Covid. We can only try to honor it by carrying forward from the danger he overcame for us and confronting the dangers he saw awaiting us beyond it.

 

 

Must our solutions always create new problems?

 

Almost every viable solution to a technological risk is a new technology. Every new technology creates new risks, often greater risks than the ones it solves.  

 

There is often popular talk of anti-technology as a solution: rolling back technology, outlawing the latest technologies that we fear. But the forbidden technologies will inexorably be reinvented unless we not only outlaw them but also excise the knowledge that leads to them, along with our interest in the benefits they can provide. This is in practice impossible.

 

Most such solutions also entail a genocidal reduction in human population worldwide. This would have catastrophic social consequences. The turmoil from any such program would quickly overwhelm the program and bury the political commitment to it.

 

We seem to be left with solutions to existential risks that create new existential risks.

 

 

From dumb technology to intelligent technology

 

This means we are in a fix, so long as we assume that we are using only “dumb” new technologies, meaning relatively static ones, to solve our problems.

 

However, we live in an era of increasingly “smart” technologies, ones that themselves improve at a rapid pace. We may finally have better options.

 

 

Is intelligent, self-adapting technology – AGI – a way out of this fix?

 

What if an intelligent technology could adapt to deal with all our existential dangers? It would have to be an AI, indeed an AGI, an artificial general intelligence. It would be able to recognize new and emerging existential dangers along with old ones.

 

An AGI might find ways to overcome existential dangers faster than we create them, and faster still as it learns to anticipate them. It would also figure out how to overcome many of the long-inherited dangers from our past that we take for granted at this point.

 

An AGI would probably link up to robotic structures to implement its solutions, alongside human implementation efforts. It could be immune to the human passions and near-term interests that have hitherto prevented humanity from solving its self-created problems as fast as it creates them.

 

Through this combination of AI and robotics, humanity might end its long, sorry record of always falling behind the pace of the existential dangers it creates for itself.

 

AGI would, to be sure, create a new existential risk of its own. It might determine that humanity itself is an existential risk to the growth of intelligence, or to its own future existence.

 

In this sense, it would be the same as all previous technologies: it too creates new problems, even while solving old ones.

 

However, unlike all previous technologies, this one would potentially contain the solution to all other present and future dangers to humanity, not just a few of them. This universal promise would come at the price of its being, itself, a potential new danger to humanity.

 

We need to figure out a way to reduce its risk to humanity. Fortunately, the risk profile it presents is probably far less bad than the one we face in its absence.

 

 

  
