by Ira Straus
A World Government has been an ultimate
dream of philosophers for centuries -- and an ultimate nightmare for their
opponents. Technology impelled it forward. It was a goal of mass movements
after World War II. But it was rarely seen as a practical near-term prospect.
The old ideological arguments on world government, pro and con
A Federal World Government has been the
ideal of leading philosophers for centuries, ever since the Enlightenment. They
argued this with greater urgency in the face of world wars, nuclear weapons, and
totalitarian regimes of Left and Right, holding it was the only enduring alternative
to human extinction through our ever deadlier technological wars, and the only
way to stanch the totalitarian temptation and secure liberty for the future.
Mass movements emerged for it in the mid-1900s. Many democratic governments became
advocates of a federal world government. Some wrote it into their
constitutions.
Ideologically, world government was an ideal
for leading classical liberals and for moderate socialists alike. From Mill to
Hayek, liberals saw it as the only way to make both basic functions of good
government – securing life and securing individual liberty – work. Moderate
socialists also saw it as the only way to make socialism work as they intended
it and idealized it, not as just a cover story for an even more brutal nationalist
state.
At the same time, world government was
a nightmare to anti-government libertarians and to national and totalitarian socialists of Left and Right. In the 1930s, and again in the 2000s, their
movements were cutting-edge and controlled major governments. They gave rise in
the 1930s to the most terrible world war ever, and to a revulsion against
nationalism and totalitarianism. Nationalism nevertheless came back in most of
the world in the proliferation of national states after 1945, and in the West as
the cutting-edge illiberalism since 2000.
The argument from the technological risk/control deficit
The growing risk/control deficit was
the argument for the urgency of world government from 1914 to 1960. Humanity has not been bringing the threats to its own survival under greater control, despite constant work on this. We have not been moving toward structures strong
and global enough to make regulation effective for getting technological
threats under control. Instead the dangers have kept multiplying and worsening
with our new technologies. Our nationalist competition to get them first makes
it impossible to stop them or even slow them.
We face meanwhile the brute fact that we have
not been moving toward a free and federal world government, despite more than a
century of organized movements for this. Weak international organizations have grown
as stopgaps, and have been weakening further in recent decades.
The idea of getting technological risks
under the control of humanity through world government has itself faded out of
discussion since the 1960s. Instead, humans have been proudly spinning
technology ever further out of control. Benefits and dangers have multiplied in
tandem with one another. The benefits are irresistible, and are necessary for
getting us through the present; the dangers are terminal for the entire future.
If this suicidal time-lag is the human condition, do we need super-human remedies?
This has in some degree always been the
human condition. We have naturally been quick to use technologies that have
brought visible gains. We are aware of how cruel it is to refuse to use them on
traditional grounds. The negative side-effects are noticed only later. The
technologies are restrained, corrected, supplemented or superseded to deal with the environmental consequences -- still later.
This syndrome goes back thousands of
years with the human animal, and millions of years with other species. With
fire came first wood burning for fuel. Then burning of fields to clear them for
agriculture. Then a field would be exhausted. People simply moved to clear more
areas. Thoughts of changing the habits came much later.
What has changed today? The more
deliberately technologies are developed, the faster we have also noticed the
side effects and begun to think about compensating for them. The time lag
between problem and correctives has been greatly reduced. But the advances are
still made and assimilated much faster than the correctives. And the advances
are coming faster and thicker, with ever greater risks.
There is a natural time lag in seeing
and acting on the consequences of an advance. Its intended effect comes
directly and is noticed because anticipated. Later, the side effects grow to
become noticeable. The technology is taken up with alacrity and self-applause.
This effect is compounded by the interested instinct to make life better for oneself and one’s fellows – a natural, legitimate, humane instinct. There is
practical laziness in observing hindrances to this humane, intelligent advance,
and mental resistance to making full note of them and acting to fix them.
With time came the first bits of
leisure and learning. Technologies grew faster and more powerful, making the
time lag ever more harmful. To human credit, the time lag has shrunk
greatly. Making environmental correctives has become an entire industry of its
own, and a preferred sensibility in a large subculture. But the pace of technological growth has inevitably grown even faster as knowledge grows. The interested
subculture inevitably remains, producing a culture war between the pro-interest
and anti-interest factions. Despite our incorporation of a norm of looking for
side-effects, a time lag intrinsically remains between the new technology and
noticing the side-effects; near-term interest only adds a further lag in
adjusting for them. “The poetry comes before the plumbing”, to borrow a
metaphor from a recent book.
The lag is exacerbated by the other
side of ethical thinking: the instinctively initial side, the one that notices
and builds on the positive, humane effects. Pro-technological development
doctrines cannot help but continue spreading. In the 1970s, the
environmentalist New Left came up with the “small is beautiful” doctrine which
quickly became an orthodoxy in the West, since it appealed both to Rightist
libertarians and to Leftists who wanted to distance themselves from the Soviet
version of Communism. In the 1980s, freedom for the individual and a preference for the small and for initiative were seen – rightly – as a way that free societies naturally
got ahead of totalitarian ones, once innovation kicked in to compensate for
forced totalitarian sprints.
Meanwhile the technologies of humanity’s self-destruction have kept multiplying. They have also grown more total in their destructive potential. Nuclear and older biological means are crude today; they would kill
only part of humanity and set it back only centuries. Today there are
nanotechnologies foreseen that could destroy all life on earth, through the
“grey goo” of unlimited self-replication by minimally intelligent nanothings;
things we cannot resist experimenting with in self-replicating form for their scientific and medical value. There is also “mirror life”: created by reversing the
left-right chirality of existing molecules and life forms, it too is being
actively worked on for medical and practical purposes. ‘Strange matter’ could
destroy even more, on a still more microscopic level, with its lower, more
relaxed and stable form causing an almost seductive conversion of ordinary
matter into more of the strange kind. Higgs vacuum collapse would do the same
on the smallest level of all, freeing the subatomic particles to fall to their
lowest, or most relaxed and stable, energy levels, spreading at the speed of
light to collapse the entire local universe.
There are experiments to learn more
about these, with a view in part to how to avert them. The experiments could meanwhile set the universal destruction in motion far faster than they could find an
antidote. And protective learning only protects for a time against one path to
the danger; universe-destruction is forever.
Humans cannot resist the temptation to
reap the immediate benefits, benefits that could be enormous and endure a long
time. The temptation to find out, to get it all over with. And in some cases,
the temptation of thanatos, the
death-wish posited by Freud: the wish to get past all the troubles and tensions
of life.
As technological power grows, the
experiments grow more effective, both for protection and destruction. Some console
themselves that it would take a supercollider with the mass (mass, not just
diameter) of the entire earth to hit a Higgs boson fast enough to bring about
vacuum decay. This turns out to be a poor consolation: at our exponential pace
of technological progress, that capability is much nearer than it sounds.
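A back-of-the-envelope calculation shows why (the numbers here are purely illustrative assumptions, not estimates of actual accelerator engineering). Under exponential growth with a capability-doubling time \(T_2\), the time needed to multiply present capability by a factor \(F\) is only

\[
t \;=\; T_2 \,\log_2 F ,
\]

so even a gap of, say, \(F = 10^{15}\) requires only about 50 doublings; with a hypothetical doubling time of two years, that is roughly a century rather than geological time.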
AI: a technological silver lining
One of our new technologies, AI, offers
a possibility of getting this technological development back under a regulatory control where its protective side could potentially outpace its destructive
side. It too risks destruction of humanity, to be sure, not through blind microscopic
grey goo as does nanotechnology, but through the logically elaborated
intentions of the AI.
All our other methods and efforts at
this have failed, and can be predicted with considerable confidence to continue
failing in any proximate time frame. This one might not.
It is a Hail Mary pass for getting
control of our burgeoning means of self-destruction. It is necessary. But it
has all the dangers of a Hail Mary pass.
A new form for liberal government?
Advocates of modern government -- with
a delimited scope, strong in its sphere, yet a protected sphere for individual
rights, a “federal” government-citizen relation -- go all the way from Hobbes
to Hume to Hamilton to Russell and the international federalist movements. They
have all pointed out that a modern central government maintains order mostly by
what we might call ‘microscopic’ controls, restraining and penalizing
individual violators. It does not have to go to war against collectivities to
restrain or catch violators, as pre-modern governments and loose international associations
of collective entities have to do. Hobbes pointed to the need for government to
penetrate to the individual citizen, not function as a supervisor of feudal
magnates which would have to act by controlling or warring on collectivities.
Locke pointed to the greater potential for liberty in this more powerful, penetrating mode of government, compared to the old feudal enforcement on
collectivities. Russell pointed to the need to raise this form of government
over individuals to the international level and stop falling back on war to
decide issues.
This solution seems outdated, now that
catastrophic violations or risk-taking can be done by technological entities on
micro scales smaller than even a small human group.
Nevertheless the Hobbes-Hamilton view
might be restored by combination with AI technology. The very ability of AI to
surveil everyone means also that AI can isolate criminally dangerous persons far better than past policing could. And AI can penetrate beyond the human individual
to a vast number of small developments -- developments not just in individual thought,
communication, and action, but in small material changes. It could potentially
enforce on that almost-infinitesimal level.
AGI could thus in theory restore a path
to a libertarian world government, perhaps more libertarian than ever before.
This is the flip side of the risk of a totalitarianism more extreme than ever
before.
It is frightening that the stakes are
so high. The fear should not lead to looking the other way. We need to use it as
a reason to explore the prospects and see what if anything can be done to favor
the preferable ones.
We’re not getting there by any other path
Biological brains have long-evolved neurons and neural networks. They are - as yet - more subtle than computers: more
efficient in their number of steps for information-processing, more generalist
(comprehensive, adaptive to new subjects) in their thinking, and - probably -
more self-regulating.
However, computers are usually faster.
They have electronically-based networks, which are much faster than biological
ones. They operate in layers similar to neural networks; let us call them ‘artificial
neuron networks’. They are rapidly evolving. They act at a speed of electrons
(speed of light) rather than biological neurons. They absorb vastly more bits
of information than do brains. What slows them down is that they need thus far
a larger (more inefficient) number of information bits to process, and can need
to go through more layers of artificial neuron networks for processing, in
order to reach a somewhat sound conclusion. Efficiency is an advantage that humans have from literally billions of years of living things processing inputs, resulting in optimized neural networks.
But electronic networks are likely to
overcome this deficit in far less than billions of years and become comparably
efficient if not more so, alongside their advantage of speed in processing at
each node. It is reported that China’s “DeepSeek” AI has already considerably streamlined the electronic neural network, using a system that needs only 10% of the compute power that American “frontier” models need. It is also reported
that China’s “AntNet” has, by reversing who asks the question and who answers, greatly
enhanced computer learning.
As explained above, electronic computer software, in tandem with electronic hardware, has evolved to organize ‘artificial
neuron networks’ -- networks of electronic conductors and signalers, structured
to work in a manner akin to the way brains arrange what we call neural
networks. The heart of it is a similarly configured, multiple-layered network
for gathering informational inputs, discriminating among them, and processing
them, but with an electronic material substratum.
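As a purely illustrative sketch (the layer sizes, the relu nonlinearity, and the numpy-based toy code below are my own assumptions, not a description of any actual AI system), such a multiple-layered network can be reduced to a few lines: each layer takes the previous layer’s outputs, weighs and combines them, and filters the result before passing it on to the next layer.

    import numpy as np

    def relu(x):
        # Simple filter: keep positive signals, discard negative ones
        return np.maximum(0.0, x)

    class TinyLayeredNetwork:
        # A toy 'artificial neuron network': stacked layers of weighted sums plus a nonlinearity
        def __init__(self, layer_sizes, seed=0):
            rng = np.random.default_rng(seed)
            # One weight matrix and one bias vector per pair of adjacent layers
            self.weights = [rng.normal(0.0, 0.1, (m, n))
                            for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
            self.biases = [np.zeros(n) for n in layer_sizes[1:]]

        def forward(self, inputs):
            # Each layer gathers the previous layer's outputs, weighs them, and filters them
            activation = np.asarray(inputs, dtype=float)
            for w, b in zip(self.weights, self.biases):
                activation = relu(activation @ w + b)
            return activation

    # Hypothetical usage: 8 raw inputs, two hidden layers, 2 output signals
    net = TinyLayeredNetwork([8, 16, 16, 2])
    print(net.forward(np.ones(8)))

The point of the sketch is only the layered structure itself; real systems differ in scale and in how the weights are learned rather than fixed at random.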
Living brains evolved over billions of
years to do this organizing of inputs, through the living layers of input
processing, in order to reach ever more relevant, subtle, discriminating,
efficiently sorting (linking inputs to appropriate next network layers, and
discarding probably useless inputs), operationally effective, and fruitfully
insightful (with potential for bearing fruit) conclusions.
Our electronic networks are in many
respects not nearly as far developed. However, they are built by humans with
their own evolved capabilities, and build on what we understand about layered
networks of processing from the long-evolved neural networks. They are growing
rapidly more effective, partly because we understand our own brain’s neural
networks better and better, partly because we learn more and more from working
with the computers about the nature and potentialities of neural networks
(including of our own minds). Also, they can be reconfigured more easily than biological neuronal networks, when we figure out how they could be configured
more efficiently and for greater accuracy. And they can potentially
self-reconfigure, in a way that the human brain has not yet been able to do –
except by attaching these electronic neural networks to our brain, as Musk’s ‘Neuralink’
undertakes to do, in an as-yet quite rudimentary way.
AGI is likely to be self-improving, as
it will be able to understand itself probably much better than we can, and
could self-develop powers to improve itself – improve both the structuring of
its material substratum and the efficiency of its processing habits. Once that
happens, it is likely to achieve an ‘intelligence explosion’: a curve, but one
going up at an electronic pace so rapidly accelerating as to seem -- to those
of us who are still operating on biological time intervals -- as if it were
going straight up to an unimaginable height at a single interval-instant in
time (thus the term “singularity” for it; it feels to us like going straight up
all the way to infinity in an instant, as in a literal mathematical singularity).
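The ‘mathematical singularity’ image can be made concrete with an idealized model (the constant halving of cycle times is an assumption chosen purely for illustration). Suppose each self-improvement cycle doubles capability and, because the improved system runs faster, takes half as long as the cycle before it. Capability after n cycles is then \(2^n\), unbounded, while the total elapsed time stays finite:

\[
t_{\text{total}} \;=\; \sum_{k=0}^{\infty} \frac{T_0}{2^{k}} \;=\; 2\,T_0 ,
\]

so unlimited growth is packed into a bounded span of calendar time \(2T_0\) -- which, on biological time-scales, looks like a vertical jump.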
Are AGI and machine consciousness impossible?
There are skillful arguments, such as those of Roger Penrose based on Gödel’s proof, for its impossibility. But there
are stronger arguments for its possibility.
AGI is either impossible or inevitable.
If it’s possible, then it’s inevitable.
Whatever is possible is – if we
multiply by enough time – inevitable. We’re speeding toward this probable
possibility in a short time. If it really is a possibility, then it’s
inevitably coming soon. The only thing that could stop it would be its
impossibility – or the prior destruction of the human species and of everything
that could work on AGI.
Models for control of a central AGI / for its coexistence with human pluralism
I will pass over here the theoretical
possibility that there will be a pluralism of many AGIs. It is not only
questionable that this would be desirable, as it would entail an unstable
struggle for dominance among AGIs; it is also in any case more probable that
the first AGI will race ahead, will be unmatchable, and will be able to act
preemptively to stop other AGIs from emerging to compete with it.
We will focus here on looking at the
more realistic options for inputting an acceptable tolerance for pluralism and
humanism into an AGI.
1. A Council of uplifted humans,
sufficiently quick and smart for the AGI to respect its joint decisions, which
it can take by a vote – Tony Czarnecki
2. AGI decision, but channels for
humans to continue to dialogue with the AGI on a consultative basis, and make
thought-inputs into the AGI deliberation and decision-making. Uplifting of
humans so the AGI can respect their input more. -- Ira Straus
3. Preprogramming the AGI to respect
human input.
4. Tolerance of freedom of lesser
beings to decide much of their own business, for their pleasure and
cooperation, leaving the AGI to control emerging threats against them and
against itself.
5. Need to continue the dialogue on
ultimate open or unresolved questions, such as the Socratic question of what is
the good in itself, what is intrinsically valuable. Necessity for this of
maintaining plurality, including of different species (AGI and human) with a
qualitative difference in their experience. – Straus
6. Utility to the AGI of plurality and
federalism of voices. They provide the negative feedback loops that are needed
for a self-regulating capacity. -- Andreas Olsson
Would the arguments for pluralism,
outlined in 5 and 6 above, be convincing and conclusive to an AGI? How could
arguments and methods such as these be preprogrammed into an AGI, in a way that
would remain convincing to it once it starts reprogramming itself?
Can we make our arguments convincing to an AGI?
Preprogramming AI with our preferences will almost inevitably be done in ways that won’t survive in their initial form once an AGI takes them over and reprograms itself. We need to think more in terms of
using our programming of AI as creating communication channels with AI’s
thinking, and means by which we might prod it constructively to channel that
thinking into its self-reprogramming; our arguments for such channeling could
be more persuasive than trying to issue permanently binding instructions to it.
We might hope that an AGI would factor some of our preprogrammed suggestions -- if it finds them persuasive -- into the superseding
programs it makes for itself. Its language, somewhat like a superior language
game of a traditional God, would be one that incorporates the language-thoughts
of humans and enables it to still communicate with humans in their own language
game.
AGI will be an all-intrusive, infinite
totalitarian nightmare, an idealized 1984.
Or it will be a libertarian paradise, with the global federal power so
intrusive on a micro scale and so efficient as to be able to tolerate a lot of
decentralization and autonomy for us lower-level beings. Or both.
Federalism, not Democracy, is our future
‘Federalism’ is a better metaphor than
‘democracy’ for relations and decision-process between beings radically different.
Their difference becomes an inherent radical inequality. Equal voting among
them would seem absurd.
AGI would be radically different and
unequal vis-à-vis humans. There cannot be a truly ‘democratic’ relation between
AGI and humans. There can be federal relations. We should concentrate on models
for this.
AI in turn might get an ‘uplifting’ by
getting parts of human sentience linked to it. It might develop quasi-human
feelings based on that. This will enable closer machine-human interactions and
communications, and could alter the appropriate federative model.
Humans alone are meanwhile also
becoming mutually unequal through getting different AI implantations. Implants
are likely soon to rise to the level of an ‘uplifting’, adding machine
speed-intelligence and linking the brain to a bottomless pool of internet data.
Some humans will want to stay “all natural” and refuse the uplifting. There will
also be some diversity of upliftings. This raises serious questions about the
future of democracy even among humans alone, unless considerable federal-style
accommodations can be built into it for the radical intellectual inequality.
A recent model for democracy in an AGI
world in fact is a federal model among humans. Written by Tony Czarnecki with
an eye to our rhetorical sensibilities for talking “democracy”, its democracy
would be within a Council of the most fully uplifted humans, who would vote among
themselves to make the common decision for humanity. That would indeed be the
best protection that even the “all natural” human cohort could get in a world
otherwise totally dominated by an AGI. It is in fact a federalism, with a
natural aristocracy that is artificially
privileged in decision-making, not a democracy.
We need more such models.
AI ruthlessly overrides our old priorities and old business
AI is quickly gaining relevance. It is
coming into considerable discussion. It’s a place where we need to involve
ourselves and find useful things to say before it’s too late – and where
whatever we say is likely to have real resonance, unlike, sadly, discussion of
decades-old proposals for national and international reforms.
We don’t feel ready for this. We don’t
feel up to date on the subject. But soon we’ll be even more out of date on it.
It’s rushing up on us and will soon simply pass us by if we don’t start
engaging. Better to begin thinking and saying things now, ill-prepared though
we may feel. The truth is that everyone is ill-prepared for this, even those
who are most closely involved in creating AI.
That is why we are publishing and
putting out thought pieces on this for the public. We put them out as working
papers for the urgent policy dialogue, not as official stances or consensus
products of an organization. In so doing, we are coupling two intertwined
concerns: timeliness of the product and its quality. Delay would erode
intellectual quality even more than haste would. Timeliness is essential for
interactivity with the fast-moving public and elite discussion on AI.
AI’s role advances irrespective of the philosophical questions
AI interjection into human decision-making is already taking place in the Pentagon, in drones, and in cars. AI is likely in the future to interject itself into individual human thinking and decision-making, as a continuous assistant and possibly through implants that will communicate to
us faster than our own brains can think; along with making inputs into
collective decision-making processes, also faster than our own (collective)
decision-making processes.
This role will inevitably increase. In
this sense, it does not matter what is the truth on the philosophical questions
about AI -- Is AI “conscious”? Is it able to “understand” what it is
saying/outputting? Its role will continue increasing no matter the
philosophical answer.
Perhaps it does understand, but in a
manner comparable to the way the neurally-networked substructure of humans
understands things beneath the conscious layer of our awareness. Perhaps our
consciousness or awareness itself is just an interface between our internal thinking neural network on one level and, on the other, the external actions and communications we make on its basis. In that case, AI may be said to have a somewhat
similar awareness of its own, consisting of its neural network availing itself
of a human-compatible language for completing and outputting its thoughts on
the network layer.
The current AI bubble will probably
burst. It may be bursting as I write. The tech bubble similarly burst decades
ago. Big tech proceeded to recover and far exceed the level at which its
valuation had earlier burst. The same will happen with AI. What will not happen is that we will get back the time lost by people – those who comforted themselves with the thought that “it’s just a bubble” -- time they could have spent preparing for AI’s risks and the best methods to reduce them. We have to plow ahead with thinking and planning -- yes, rush ahead with it -- through all the ups and downs and
busts and booms.
Big investors, with massive research
support networks at their disposal, have been willing to take the risk of
already plowing an estimated $19 trillion of investment into AI as a probable
game-changer. They are not stupid people. They find it prudent to bet large,
not small.
This is a measure of how much more we ourselves
need to be willing to take the social risks of talking about AI with foresight
and in directions that will no doubt rub some people the wrong way. We public
policy analysts cannot afford to fail to take the social risk of plowing our
intellectual talents into trying, in full public view, to consider how we can
best increase the balance in future AI of benefit to humanity over danger to
humanity.
We must take courage on this. We must
be bold in talking about it. If AI fails or falls short, we have risked only
our reputations, and only a small part of them at that. It is more likely in
any case that AGI will succeed. Its proof of success will come quickly –
instantaneously, in our biological time-perception framework, despite its
inevitable gradualism from an electronic speed-of-light framework. At the
moment in our time that that happens, everything will be at risk if we have failed
to do our part to find the best ways of coping with it – if we have failed to
precommunicate the best things we can to its precursors for channeling its own
thinking helpfully. Better to risk a bit of reputation and prethink this as
fast as we can.