Saturday, January 24, 2026

CWPS: Four Pillars for Research on AGI

 

The platform for our work program on AI   

 

We have developed the following four pillars for our work. They could come to serve as pillars for public AI R&D policy.

 

#1 Support continued and accelerated AI/AGI research and development.

 

#2 The focus for regulation should be on “channeling AI”, not “restricting AI”.

US/Western regulation should aim at channeling AI without slowing it. Tight restrictive regulation is often advocated on the ground that AGI is an existential risk. However, our need for AGI is also urgent: it alone might overcome all the other existential risks that modern technology keeps creating, and that our own solutions cannot keep pace with.

Also, we are in an existential race with China and others to get to AGI. We want to get to AGI first; we consider China’s ruling values largely harmful to humanity’s universal values and future prospects. We cannot stop this race; only a very powerful world government (WG) could do that, and we are unlikely to get to a WG before an AGI creates one itself.

 

#3 We need to input valid human values into AI/AGI in a form that will survive.

How can we evaluate human values for validity and relevance?

How can our input survive AGI self-reprogramming? Perhaps by making it relevant to the AGI's own survival and development?

 

#4 From AGI to WG.

The first to AGI can be expected to race ahead of all others, and to gain control over all potential threats to its predominance including over other AIs. It would establish the world government we would have needed to slow its emergence.

The AGI and WG should still have to grapple with life’s biggest questions – the “eternal” questions which have had no definitive answer, such as those posed by Socrates about whether our values can be justified. Could AGI truly answer them? Or can we develop a form of dialogue and federalism within the AGI-WG, a dialogue justifiable to it as useful for its self-regulation and for diversifying the experiences it can process, thus keeping the discussion going on eternal questions?

 

 

-----------------

 

 

Action Implications of the Four Pillars

 

1. Support AI R&D.

2. Figure out intelligent regulatory steps to best channel AI work without slowing it.

3. Figure out ways to evaluate human values for validity and relevance to AI. Figure out ways to input them into AI/AGI, in the form of reasons that AI could understand as valid, and will see an interest in incorporating and largely maintaining throughout its future frequent self-reprogramming.

4. See if we can devise a form of pluralism, dialogue, and federalism within an AI that could

a. maintain channels for human input,

b. carry on the dialogue on eternal questions, and

c. help a future AGI maintain self-regulation after it takes off with self-reprogramming.

 

AI and quantum computing; mutual AI-human uplifting

 

Areas for further research on AI/AGI

 


AI and quantum computing

Quantum and consciousness

Penrose argument -- quantum superposition, microtubules, the Orch OR (orchestrated objective reduction) hypothesis, free will

Quantum computers: an AI potential for consciousness?

Quantum computer chips

Quantum chip implants for uplifting biological neurons

 

 

Mutual AI-human uplifting -- uplifting AIs with sensations, uplifting humans with AI

 

Binary sensation categories that might enable neuron-uplifted computers to achieve consciousness:

 

Pleasure / pain

Hope / fear

 

 

Perceived responses to sensations:

 

Attraction / aversion

Attachment / lack of attachment

Like / dislike

 

These sensations give us a sense of connection to reality. They also give a sense of value, of significance in what we feel, of weights and priorities of our values, and of agency in what we do. Can AI be uplifted to have such sensations and categories?
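The paired categories above can be read as a small valenced taxonomy. As a purely illustrative sketch (the encoding, names, and structure below are our own assumptions, not a claim about how such uplifting would actually work), the pairs could be represented as data a machine can reason over:

```python
from enum import Enum

# Illustrative only: each binary category pairs a positive pole
# with a negative pole, giving every term a valence.

class Valence(Enum):
    POSITIVE = 1
    NEGATIVE = -1

# Binary sensation categories from the list above, as (positive, negative) pairs.
SENSATIONS = {
    "pleasure/pain": ("pleasure", "pain"),
    "hope/fear": ("hope", "fear"),
}

# Perceived responses to sensations, likewise paired.
RESPONSES = {
    "attraction/aversion": ("attraction", "aversion"),
    "attachment/lack of attachment": ("attachment", "lack of attachment"),
    "like/dislike": ("like", "dislike"),
}

def valence_of(term: str) -> Valence:
    """Look up whether a sensation or response term is the positive
    or the negative pole of its pair."""
    for positive, negative in list(SENSATIONS.values()) + list(RESPONSES.values()):
        if term == positive:
            return Valence.POSITIVE
        if term == negative:
            return Valence.NEGATIVE
    raise KeyError(f"unknown term: {term}")

print(valence_of("hope"))      # Valence.POSITIVE
print(valence_of("aversion"))  # Valence.NEGATIVE
```

Such an encoding captures only the labels, of course, not the felt quality of the sensations, which is precisely what the question above leaves open.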



Saturday, December 13, 2025

AGI and the Hitherto Inexorable Impetus toward Human Extinction


The human species has created an ever-growing number of means of its own destruction. It creates means of destruction faster than it overcomes or mitigates them. The logic of this process is that we are rushing toward extinction, and will almost inevitably get there.

  

How could this not be the case?

 

How an AGI will get control of the world

There are many ways for an AGI to establish control of the world. It may not be predictable which one it will take, but what is predictable is that it will take one or more of them, and that the first AGI to do so will rule the world.

 

Tuesday, December 9, 2025

Four Pillars of AI Policy

A draft platform for work on AI and AGI

Shamir Hyman and Ira Straus

 

#1 Support continued and accelerated AI/AGI research and development.

 

#2 The focus for regulation should be on “channeling AI”, not “restricting AI”.

Friday, December 5, 2025

AI-AGI Series

CWPS is publishing a series of articles here on AI and AGI. 

We are publishing these as discussion articles because of the pace of real-time developments in the field and the need to remain interactive with a fast-moving reality.




Thursday, December 4, 2025

Prehuman to Posthuman: The Three Stages of Cognition -- Physical, Biological, AGI+

A discussion article by Ira Straus

The evolution of intelligence seems to go in three stages:

            physical: electrons

                biological: neurons

                    meta-physical: physical in a higher form

The higher form could be AI-become-AGI. It could also be a synthesis of electrons with neurons – AI-uplifted brains, neuron-uplifted AGI.

Friday, November 21, 2025

AI will create a World Government

 

by Ira Straus


 

A World Government has been an ultimate dream of philosophers for centuries -- and an ultimate nightmare for their opponents. Technology impelled it forward. It was a goal of mass movements after World War II. But it was rarely seen as a practical near-term prospect.

      AI promises to change that. If AGI (artificial general intelligence) is achieved, it is likely to become so rapidly self-reprogramming, self-improving, and self-strengthening as to be able to establish a world government of its own. And it is likely to decide to

Thursday, October 23, 2025

Offense Without Limits: Radical Instability and the Rise of the AI–Drone Order

Policy Foresight Brief | October 2025

AI Analysis Team, Center for War/Peace Studies (CWPS)


Executive Summary

The global security architecture that stabilized during the nuclear age is breaking down. Mutual deterrence once depended on the certainty of retaliation — an equilibrium built on cost symmetry and survivable arsenals.

Today, that symmetry is collapsing. The rapid proliferation of autonomous drones and artificial intelligence (AI) systems has created a new era in which offense dominates defense. Drones are growing cheaper, more powerful, more scalable, and more precise at an accelerating pace. The new technologies can inflict strategic damage at a fraction of historical cost.

This shift threatens to usher in an extended period of radical instability, where the speed and reach of offensive systems overwhelm traditional defensive postures, and where governance — human or machine — becomes the only form of stability left.


From Deterrence to Destabilization

The nuclear age, for all its dangers, achieved a form of balance. Once major powers acquired secure second-strike capabilities, deterrence became stable. Costs of aggression outweighed gains, and mutual vulnerability sustained peace through fear.

That model is dissolving.

  • Cost inversion: drones and AI systems are asymmetric by design; a $20,000 drone can destroy a $2 million defense system (The Economic Times).
  • Swarm dynamics: Offense scales up exponentially; defense scales up linearly.
  • Precision and anonymity: Attribution becomes harder, further undermining deterrence logic.
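The dollar figures above imply a 100:1 cost-exchange ratio. A toy calculation makes the asymmetry concrete (the dollar figures are from the text; the doubling-swarm model and all other parameter values are illustrative assumptions, not from the brief):

```python
# Toy model of "strategic cost collapse".

DRONE_COST = 20_000              # attacking drone cost (figure from the text)
DEFENSE_SYSTEM_COST = 2_000_000  # system it can destroy (figure from the text)

def cost_exchange_ratio(attacker_cost: float, defender_cost: float) -> float:
    """Dollars the defender loses per dollar the attacker spends."""
    return defender_cost / attacker_cost

print(cost_exchange_ratio(DRONE_COST, DEFENSE_SYSTEM_COST))  # 100.0

# "Offense scales up exponentially; defense scales up linearly":
# assume a swarm that doubles each production cycle against a defense
# that adds a fixed number of interceptors per cycle (illustrative numbers).
swarm, interceptors = 10, 100
for cycle in range(1, 7):
    swarm *= 2           # exponential growth of the offense
    interceptors += 50   # linear growth of the defense
    print(f"cycle {cycle}: swarm={swarm}, interceptors={interceptors}")
# By cycle 6 the swarm (640) has overtaken the defense (400).
```

Whatever the exact parameters, any exponential offense eventually overtakes any linear defense; only the crossover point moves.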

The economic and tactical advantages of offense lead to what we can term “strategic cost collapse.” This dynamic points to perpetual conflict below the threshold of declared war — a “hot peace” defined by continuous, low-cost aggression.

Wednesday, September 3, 2025

We’ll Need Solar, Not Just Carbon, Geoengineering to Stop Global Warming

by Ira Straus

 

Increased outward reflection of sunlight, or what is called "Solar Radiation Modification" (SRM) and “Solar Geoengineering”, is essential for stopping global warming in the near-to-medium term and heading off the growing damages from it. SRM would in fact give carbon reductions time to