Saturday, January 24, 2026

CWPS: Four Pillars for Research on AGI

 

The platform for our work program on AI   

 

We have developed the following four pillars for our work. They could come to serve as pillars for public AI R&D policy.

 

#1 Support continued and accelerated AI/AGI research and development.

 

#2 The focus for regulation should be on “channeling AI”, not “restricting AI”.

US/Western regulation should aim at channeling AI without slowing it. Tight, restrictive regulation is often advocated on the grounds that AGI is an existential risk. However, our need for AGI is also urgent: it alone might overcome the other existential risks that we keep creating with modern technology, risks our own solutions are not keeping pace with.

Also, we are in an existential race with China and others to get to AGI. We want to get to AGI first; we consider China’s ruling values largely harmful to humanity’s universal values and future prospects. We cannot stop this race; only a very powerful world government (WG) could do that, and we are unlikely to get to a WG before an AGI creates one itself.

 

#3 We need to input valid human values into AI/AGI in a form that will survive.

How can we evaluate human values for validity and relevance?

How can our input survive AGI self-reprogramming? Perhaps by making it relevant to the AGI's own survival and development?

 

#4 From AGI to WG.

The first to reach AGI can be expected to race ahead of all others and to gain control over all potential threats to its predominance, including other AIs. In effect, it would establish the very world government that would have been needed to slow its own emergence.

The AGI and the WG would still have to grapple with life’s biggest questions – the “eternal” questions that have never had a definitive answer, such as those Socrates posed about whether our values can be justified. Could an AGI truly answer them? Or can we develop a form of dialogue and federalism within the AGI-WG, a dialogue it could see as useful for its own self-regulation and for diversifying the experiences it can process, thus keeping the discussion of the eternal questions going?

 

 

-----------------

 

 

Action Implications of the Four Pillars

 

1.     Support AI R&D

2.     Figure out intelligent regulatory steps to best channel AI work without slowing it.

3.     Figure out ways to evaluate human values for validity and relevance to AI. Figure out ways to input them into AI/AGI in the form of reasons that the AI could recognize as valid and would see an interest in incorporating, and largely maintaining, throughout its frequent future self-reprogramming.

4.     See if we can devise a form of pluralism, dialogue, and federalism within an AI (see the sketch after this list) that could

a.     maintain channels for human input,

b.     carry on the dialogue on eternal questions, and

c.      help a future AGI maintain self-regulation after it takes off with self-reprogramming.
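One way to make item 4 more concrete is a toy “federated deliberation” loop: several value perspectives, including a live human-input channel, each score a proposed action; strong objections act as vetoes, and unresolved disagreements stay on the agenda rather than being erased. The sketch below is a minimal illustration only; the perspective names, scoring scale, and veto rule are assumptions made for this example, not a design committed to in this platform.

```python
# Illustrative sketch only: a toy "federated deliberation" loop among value
# perspectives inside one system. Names, scores, and the veto rule are assumed.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Perspective:
    name: str
    evaluate: Callable[[str], tuple[float, str]]  # action -> (score in [-1, 1], reason)

def deliberate(action: str, perspectives: list[Perspective],
               veto_threshold: float = -0.8) -> dict:
    """Collect every perspective's view; strong objections block the action,
    and weak or uncertain views are kept open for further dialogue."""
    votes = {p.name: p.evaluate(action) for p in perspectives}
    vetoed = [n for n, (score, _) in votes.items() if score <= veto_threshold]
    scores = [score for score, _ in votes.values()]
    open_questions = [f"{n}: {reason}" for n, (score, reason) in votes.items()
                      if abs(score) < 0.2]  # unresolved views stay on the agenda
    return {
        "action": action,
        "approved": not vetoed and sum(scores) / len(scores) > 0,
        "vetoes": vetoed,
        "open_questions": open_questions,
        "record": votes,  # preserved so the dialogue can continue later
    }

# Example use: two internal value perspectives plus a human-input channel.
perspectives = [
    Perspective("precaution", lambda a: (-0.3, "unclear long-term effects")),
    Perspective("human_welfare", lambda a: (0.6, "reduces near-term harm")),
    Perspective("human_channel", lambda a: (0.1, "humans request more consultation")),
]
print(deliberate("deploy_new_subsystem", perspectives))
```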

 

AI and quantum computing; mutual AI-human uplifting

 

Areas for further research on AI/AGI

 


AI and quantum computing

Quantum and consciousness

Penrose argument -- quantum superposition, microtubules, the Orch OR (orchestrated objective reduction) hypothesis, free will

Quantum computers: a potential path to AI consciousness?

Quantum computer chips

Quantum chip implants for uplifting biological neurons

 

 

Mutual AI-human uplifting -- uplifting AIs with sensations, uplifting humans with AI

 

Binary sensation categories that might enable neuron-uplifted computers to achieve consciousness:

 

Pleasure / pain

Hope / fear

 

 

Perceived responses to sensations:

 

Attraction / aversion

Attachment / lack of attachment

Like / dislike

 

These sensations give us a sense of connection to reality. They also give us a sense of value, of significance in what we feel, of the weights and priorities among our values, and of agency in what we do. Can AI be uplifted to have such sensations and categories?
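As a purely illustrative aside, the binary sensation categories and perceived-response categories above can at least be written down as data. The Python sketch below encodes the pairs, an assumed default mapping from sensation to response, and an intensity field as a crude stand-in for “weights and priorities.” The names and the mapping are our own assumptions for this sketch, and nothing in it addresses whether such a structure could ever amount to felt experience.

```python
# Illustrative only: encoding the sensation and response categories as data.
# The mapping and intensity scale are assumptions; no claim about consciousness.

from dataclasses import dataclass
from enum import Enum

class Sensation(Enum):
    PLEASURE = "pleasure"
    PAIN = "pain"
    HOPE = "hope"
    FEAR = "fear"

class Response(Enum):
    ATTRACTION = "attraction"
    AVERSION = "aversion"
    ATTACHMENT = "attachment"
    NON_ATTACHMENT = "non_attachment"
    LIKE = "like"
    DISLIKE = "dislike"

# Assumed default mapping from a felt sensation to a perceived response.
DEFAULT_RESPONSE = {
    Sensation.PLEASURE: Response.ATTRACTION,
    Sensation.PAIN: Response.AVERSION,
    Sensation.HOPE: Response.ATTACHMENT,
    Sensation.FEAR: Response.AVERSION,
}

@dataclass
class FeltEvent:
    """A stimulus tagged with a sensation, its intensity (a crude 'weight'
    or priority), and the response it evokes."""
    stimulus: str
    sensation: Sensation
    intensity: float  # 0.0..1.0, a stand-in for how much the event matters

    def perceived_response(self) -> Response:
        return DEFAULT_RESPONSE[self.sensation]

# Example: ranking events by intensity gives a toy version of "weights and
# priorities" among what is felt.
events = [
    FeltEvent("warm sunlight", Sensation.PLEASURE, 0.4),
    FeltEvent("sudden loud noise", Sensation.FEAR, 0.9),
]
for e in sorted(events, key=lambda e: e.intensity, reverse=True):
    print(e.stimulus, e.sensation.value, "->", e.perceived_response().value)
```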