A draft platform for work on AI and AGI
Shamir Human and Ira Straus
#1 Support continued and accelerated AI/AGI research and development.
#2 The focus for regulation should be on “channeling AI”, not “restricting AI”.
US/Western regulation should aim at channeling AI without slowing it. AGI
is an existential risk, but we keep creating more and more existential risks
with modern technology, and AI is the only one that might overcome all of them.
Meanwhile, we are in an existential race with China and others to get to AGI. We
cannot stop this race; only a very powerful world government (WG) could do that.
We want to get there first; we think China’s ruling values are harmful to human
values and to future prospects.
#3 We need to input valid human values into AI/AGI in a form that will survive.
How can we evaluate human values for validity and relevance? And how can our
input survive AGI self-reprogramming? Perhaps it can, if we make it relevant to
the AGI’s own survival and development.
#4 From AGI to WG.
The first to reach AGI can be expected to race ahead of all others, to gain
control over all potential threats to its predominance, including other AIs, and
to establish the world government we would have needed in order to slow its
emergence. The AGI and WG would still have to grapple with life’s biggest
questions – the “eternal” questions posed by Socrates, which seem to have no
final solution. Could AI truly solve them? Or can we develop a form of dialogue
and federalism within the AGI-WG, both for its self-regulation and to keep the
discussion of these questions going?
Action Implications
1. Support AI R&D
2. Figure out intelligent regulatory steps to best channel AI work without slowing it.
3. Figure out ways to evaluate human values for validity and relevance to AI, and ways to input them into AI/AGI in the form of reasons that the AI will understand as valid and will see an interest in incorporating and maintaining throughout its frequent future self-reprogramming.
4. See if we can devise a form of pluralism, dialogue, and federalism inside AIs that could
a. carry on dialogue on the eternal questions,
b. maintain channels for human input as potentially relevant, and
c. help a future AGI maintain self-regulation after it takes off with self-reprogramming.
Two further areas of work
1 AI and quantum computing
Quantum mechanics and consciousness (the Penrose argument)
Quantum superposition in microtubules
The Orch OR hypothesis and free will
Quantum computers and their potential for consciousness
Quantum computer chips
Quantum chips for uplifting neurons
2 Mutual uplifting of AI and humans (AIs with sensations, humans with AI)
Binary sensation categories that might enable neuron-uplifted computers to achieve consciousness:
Pleasure / pain
Perceived responses to sensations:
Attraction / aversion
Attachment / lack of attachment
Like / dislike
These sensations give us a sense of connection to reality, of value, of agency in what we do, and of the significance and priority of what we feel and do. Can AI be uplifted to have such sensations and categories?