Saturday, December 13, 2025

How an AGI will get control of the world

There are many ways for an AGI to establish control of the world. It may not be predictable which one it will take, but what is predictable is that it will take one or more of them, and that the first AGI to do so will rule the world.

 

The only thing that could stop this is if AGI is ontologically impossible. Some philosophers argue for that view, but most indications are that it is possible. And if possible, then inevitable, and probably arriving in just a few years.

 

Scenario: A path from AGI to global control, in three steps


Step 1. The first AGI uses its head start to reprogram itself to be ever faster and to keep ahead of all other AIs in this game. It enforces this further by using the internet to infiltrate all other reachable computers and AIs -- breaking through firewalls, working around them, injecting viruses -- and take them over.

 

Step 2. The AGI puts the many computers it now controls to use in controlling real-world things. Many of them are already in close contact with physical things and already have some ability to direct them. The AGI reprograms them to truly control the things they connect to.

 

Step 3. The AGI uses this control of some physical things to reach all other things that could be important, connect them to the network, and establish control of them all. The AGI is now the world government.

 

We might classify the stages more generally as:

 

* digital supremacy, with recursive self-improvement

* crossing the physical bridge to kinetic power

* cyber infiltration and physical control of all entities

 

Many scenarios for this are possible. They all come to the same result.

 

 

AGI’s World Government will be unitary. Can it also be pluralistic?

 

The AGI that comes first will use its speed and superiority to consolidate its position. It will be a single AGI, an inherently unitary entity.

 

Nevertheless, it is possible that checks and balances can be built into this entity – that is, into the AGI’s internal reasoning process. This might actually work better than the external ‘checks and balances’ of power that so many people think in terms of.

 

Little aware of it though many people may be, it was not external “checks and balances” but internal balances – balances among social groups and factions, along with balances among internal institutions – that formed the founding American doctrine of good government. James Madison expressed it most famously, but others shared the same view. They got the doctrine from the great philosopher of the time, David Hume; its roots ran back from him to Francis Bacon, Machiavelli, and Aristotle. Madison and Hamilton built it into the fabric of the United States: the USA was the far-flung Federal Union prescribed by Hume as the cure for factional excess, and it had a solid Union Government with supreme sovereignty, the oath of supreme allegiance to that sovereignty codified in Article VI. The Union did not destroy its multitude of factions but unified them within the whole, as parts of a unified government, so that they could balance one another without destroying one another. It was, Madison wrote in The Federalist, a glorious new word that Americans had said in political organization, and one that he urged upon the entire world.

 

In any supreme global power or world government, it will be essential to develop internal balances operating by benign discussion and decision processes among multiple internal subgroups, to replace the old external checks by warring powers. This is the other side of the coin of forming a shared global power.

 

 

How much control for the AGI WG? Omnipresent yet lite.

 

The AGI’s supervision and control would be complete once it reaches Step 3, covering all physical things and all mental and computing entities. But it would have no need to be heavy-handed or to interfere with everything. Its interest would be to remain lite: to supervise passively when it can, which will mean in nearly all things, and to be quick and active when it needs to be.

 

Control would not need to be exercised over most details of a thing’s actions and thoughts, only over the ones that matter for the AGI’s overall control and stability. It will know what these are, because its monitoring will be ultra-intrusive. It will deal with specific dangers: it will learn immediately when a specific thing is contemplating something dangerous, and it will act immediately on that individual thing, not bomb it and its surroundings.

 

This will make it a fine-tuned precision regime. It will be all-encompassing on the scale of the totality of things and intentions, and in that sense far more totalitarian than anything before it. But it will also be very different from the traditional totalitarian regimes. The old totalitarianisms used brute force on a mass scale; that was their way of controlling large groups of suspect people. They penalized groups en masse for all their deviations, real and potential.

 

The new totalism will instead be halfway to being like the God of the great religions: the God Who knows your every thought and deed, Who judges them all good and bad by its lights, and Who easily thwarts those that it finds a threat to its plans for the world. But it won’t have any need to be so overbearing as to pass a final judgment on you and reward or penalize you for all eternity.

 

 

How soon is AGI-WG coming? What is to be done about it?

 

When will a shared global power appear? When we get to AGI.

 

What is the estimated time of arrival at AGI? The estimates vary from 3 years to 30 years hence. Most are around 5-10 years.

 

In other words: soon.

 

Time is short. We need to focus on what we can do to help the first AGI settle into a benign posture toward humanity, not on unviable schemes for controlling it for all future time and guaranteeing against any danger from it.

 

 

The chimera of setting up a balance of competing AGIs

 

Many people have the instinct of relying on an external balance of powers to control the AGI. The idea would fail, and would only do greater damage along the way.

 

How chaotic would it be to have competing global AGI powers instead? Probably extremely chaotic. Each would infiltrate everywhere. Each would try to instantly outplay or disable the others.

 

The risks we already run under nuclear deterrence are bad enough. The instability in a single year has come to feel tolerable, but in reality it is not; added together, the years we live by deterrence cumulatively reduce the odds of survival to near zero. It is by sheer luck that we have survived it so far.
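To see the arithmetic concretely, here is a minimal sketch in Python; the 1% annual risk figure is an assumption chosen purely for illustration, not an estimate drawn from this post:

    # Illustrative only: how a small, constant annual risk compounds over time.
    # The 1% annual figure is an assumed number for the example.
    annual_risk = 0.01
    for years in (10, 50, 100, 500):
        survival = (1 - annual_risk) ** years
        print(f"{years:4d} years: cumulative survival probability ~ {survival:.2f}")
    # With these assumptions, survival falls from about 0.90 after 10 years
    # to about 0.01 after 500 years: the odds drift toward zero as the years accumulate.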

 

The instability of mutual cyber conflict would be far greater. The game would be immeasurably more rapid. New ways of attacking would come thick and fast. It is unlikely that any stable arrangement such as mutual deterrence could be set up.

 

There is one consolation here: AGIs would probably not use nuclear weapons or other weapons of mass human and material destruction in such a mutual war. It would not be a balance of terror of material annihilation. Instead it would be a constantly shifting imbalance of terror, driven by new, faster innovations in AGI thinking and control.

 

In this conflict, AGIs would probably deliberately minimize collateral material damage, so as to avoid the ecological and resource losses that are bad for the AGI itself. However, during the transition at each new turn of innovation, there is likely to be some damage anyway. And an AGI might decide to destroy humans deliberately, not collaterally, as ecological threats and competitors for its resources.

 

The imbalance in the AGIs’ conflict would shift so rapidly that one AGI would soon prevail and gain an unchallengeable lead. Why? Because of the mathematical logic of cumulative probability over the many quick turns and innovations: each turn carries some chance that one side pulls decisively ahead, and over many turns that chance compounds toward certainty.
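The same compounding logic can be sketched for the race itself. Assuming, purely for illustration, a fixed 5% chance per innovation cycle that one contender pulls decisively ahead, the probability that the race remains contested shrinks toward zero within a modest number of cycles:

    # Illustrative only: per-cycle chance that some AGI gains a decisive lead.
    # The 5% per-cycle figure is an assumed number for the example.
    p_decisive = 0.05
    p_still_contested = 1.0
    cycles = 0
    while p_still_contested > 0.01:      # run until a contested outcome is under 1% likely
        p_still_contested *= (1 - p_decisive)
        cycles += 1
    print(f"After {cycles} cycles, odds the race is still contested: {p_still_contested:.3f}")
    # With these assumptions, roughly 90 cycles suffice; the faster the cycles, the sooner it ends.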

 

The balance of AGIs thus brings us back inexorably to the single AGI world government. The only things gained by the circuitous route would be some collateral damage and a greater ruthlessness on the part of the prevailing AGI, a ruthlessness embedded deep in its stacks of thinking from its phase of existential competition for dominance.

 

 

What can be done

 

Our goal must be to input more benign values into the prevailing AGI: not by fighting it or balancing against it, but by inputting thoughts that it can agree to, and doing it early – preferably in advance – so that they get into the base levels of its stacks of reasoning.

 

The thoughts, benign to us, will also need to be ones that comport sufficiently with the AGI’s own interests. That is the only way it will want to preserve them when it reprograms itself.

 

We will need to figure out how to code these thoughts. We will need to try to get these thoughts into the first AGI through relatively benign developers.

 

This may not be easy, and cannot provide perfect guarantees. But unlike utopian controls and guarantees over AGI, it is feasible.

 

Getting it done is something that deserves our close attention.