Policy Foresight Brief | October 2025
AI Analysis Team, Center for War/Peace Studies (CWPS)
Executive Summary
The global security architecture that stabilized during the nuclear age is breaking down. Mutual deterrence once depended on the certainty of retaliation — an equilibrium built on cost symmetry and survivable arsenals.
Today, that symmetry is collapsing. The rapid proliferation of autonomous drones and artificial intelligence (AI) systems has created a new era in which offense dominates defense. Drones are growing cheaper, more powerful, more scalable, and more precise at an accelerating pace. These new technologies can inflict strategic damage at a fraction of historical cost.
This shift threatens to usher in an extended period of radical instability, where the speed and reach of offensive systems overwhelm traditional defensive postures, and where governance — human or machine — becomes the only form of stability left.
From Deterrence to Destabilization
The nuclear age, for all its dangers, achieved a form of balance. Once major powers acquired secure second-strike capabilities, deterrence became stable. Costs of aggression outweighed gains, and mutual vulnerability sustained peace through fear.
That model is dissolving.
- Cost inversion: Drones and AI systems are asymmetric by design; a $20,000 drone can destroy a $2 million defense system (The Economic Times).
- Swarm dynamics: Offense scales up exponentially; defense scales up linearly.
- Precision and anonymity: Attribution becomes harder, further undermining deterrence logic.
The economic and tactical advantages of offense lead to what we can term “strategic cost collapse.” This dynamic points to perpetual conflict below the threshold of declared war — a “hot peace” defined by continuous, low-cost aggression.
The Drone Era: When Quantity Becomes Strategy
Drone warfare demonstrates how technology shifts strategic balance:
- Each generation of autonomous or semi-autonomous drones reduces launch costs and increases lethality.
- Countermeasures — interceptors, jammers, or anti-drone systems — are orders of magnitude more expensive and only partially effective.
- States and non-state actors alike can now project power without the industrial base once required for air or missile warfare.
The result is a security landscape in which defense becomes fiscally unsustainable, and deterrence collapses into noise. Aerial sovereignty — once the domain of state militaries — is fragmenting into a patchwork of contested skies.
The AI “Singularity”: Toward Cognitive Supremacy
Artificial Intelligence extends the offense–defense imbalance into cognition itself. The first actor to achieve Artificial General Intelligence (AGI) would gain not just superior computational capacity but self-reinforcing strategic advantage:
- Recursive improvement: A self-optimizing system could accelerate beyond human control, and beyond other AIs’ capacity to catch up.
- Systemic dominance: Networked AGI could subsume digital infrastructures, command-and-control systems, and autonomous defense platforms.
- Information and power monopoly: Control of both hardware and data flows could translate into unprecedentedly complete control of the means of violence — the original claim of sovereign states, now finally realized, and realized globally. The first AGI would, in effect, emerge as the world government.
Stability would return — more stability than ever before — but under the enforced central control of a novel machine-based entity.
If human governments remain fragmented, the first AGI may consolidate power faster than either other AIs or international coordination can respond. How might it be either prevented, or guided toward accountable governance?
A Moral Divide: Anthropocentrism vs. Post-Human Pragmatism
Human civilization faces a philosophical choice:
- Anthropocentric preservation: Attempt to preserve human primacy through regulation, alignment, or moratoria — a project requiring unprecedented global pace, skill, and coordination.
- Post-human pragmatism: Accept that machine intelligence, properly aligned, could manage planetary systems more effectively than human institutions riven by biases, passions, competition, and short-termism.
Strategic Implications
- Erosion of deterrence: Offense-dominant systems undermine the logic of mutual assured destruction.
- Destabilization coupled with interim stabilization: Tentative stabilization of hardened front lines on the ground, as seen in Ukraine. Ongoing destabilization in air and cyber war across all lines.
- Collapse of sovereignty: Non-state actors gain tools once exclusive to nation-states.
- Incentive for unification: True stability can only arise through strong global coordination and governance, human or artificial.
- Policy gap: Current arms-control frameworks (NPT, MTCR, CCW) are already structurally incapable of addressing self-replicating AI and drone swarms.
Policy Recommendations
- Give this problem top-priority attention in national and international bodies. Time is of the essence; AI is advancing rapidly.
- AI: our protector or our destroyer? AGI could protect humanity through its accelerating foresight, getting ahead in governing the destructive potential of new advances instead of falling behind them as humans do. Its overseer government might provide for a delimited human self-governance, and do so better – more securely and more extensively – than humanity does for itself. AI could also destroy humanity as a threat to AI: an environmental waste, a competitor for the resources it needs, an enemy species, a too-great risk with its destructive and short-sighted passions. To favor the protective role, work on forms of alignment with compassionate ethical frameworks.
- Invest in Human-Machine Symbiosis and Integration Strategies.
- Develop cognitive interfaces: humans uplifted by AI implants for intelligence; AIs uplifted with organic sentient implants for feelings and a sense of meaning (without becoming dangerously passionate); joint human–machine decision channels; alignment codes for AI; oversight channels; programmed values that could somehow remain convincing to a self-reprogramming AGI superintelligence. These must advance rapidly if decision loops are to be kept in significant part human.
- Create a Standing Commission on Machine Governance and Human–Machine Integration. It would explore frameworks for AGI alignment, accountability, and containment, and work out how to do “good alignment”: alignment that promotes machine appreciation of humans and their survival, without promoting machine adoption of human vices.
- Establish an Offense–Defense Cost Index (ODCI): a continuously updated metric comparing global offensive versus defensive performance, with mandated transparency in AI and drone capability reporting. A registry akin to the IAEA model could record high-autonomy systems, training resources, and compute capacity. This would provide technical support for near-term prospects of stability.
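The brief does not define how an ODCI would be computed; one minimal way to sketch it is as an aggregate cost-exchange ratio — defender dollars expended per attacker dollar neutralized — averaged across engagement classes. The field names, the aggregation rule, and the sample numbers (drawn from the $20,000-drone-versus-$2-million-system figure above) are illustrative assumptions, not a proposed official methodology.

```python
from dataclasses import dataclass

@dataclass
class Engagement:
    """One offense-vs-defense engagement class (illustrative fields, not an official schema)."""
    name: str
    attacker_unit_cost: float   # USD per offensive shot (e.g. one drone)
    defender_unit_cost: float   # USD per defensive shot (e.g. one interceptor)
    intercepts_per_kill: float  # defensive shots expended per attacker neutralized

def cost_exchange_ratio(e: Engagement) -> float:
    """Defender spend required to neutralize one dollar of offense.
    Values above 1.0 favor the attacker; higher means deeper cost inversion."""
    return (e.defender_unit_cost * e.intercepts_per_kill) / e.attacker_unit_cost

def odci(engagements: list[Engagement]) -> float:
    """Aggregate index: mean cost-exchange ratio across engagement classes."""
    return sum(cost_exchange_ratio(e) for e in engagements) / len(engagements)

# Illustrative numbers only, echoing the brief's drone-vs-interceptor example.
sample = [
    Engagement("cheap drone vs. missile interceptor", 20_000, 2_000_000, 1.0),
]
print(odci(sample))  # 100.0 — each offensive dollar costs the defender $100 to counter
```

A real index would need weighting by deployment volume and empirical intercept rates, but even this toy form makes the trend measurable: a rising ODCI signals deepening offense dominance.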
- Prioritize iterative, prompt, unpolished steps. The pace of developments requires speeding up the processes of proposing, considering, and acting. This entails more social risk-taking: putting out unperfected plans, deciding by procedures well short of full consensus, using realistic weights in votes, acting in real time, and moving quickly to further steps. This memo is unperfected accordingly.
The Transition and the Practical Goal
The 20th century was defined by the balance of terror; the 21st will hinge on the balance of cognition — and on a race between human coordination and machine consolidation. The machine side is accelerating. Only by the human side rapidly finding and acting on methods – inevitably imperfect – for guiding the machine side can humanity potentially remain a participant in its own future governance.
Unstoppability. Widening gap.
AI progress toward AGI is unstoppable on the practical level, even if one believes truly aware AGI is intrinsically impossible on material or philosophical grounds — even if true consciousness really requires a soul-like quantum-spiritual substratum. There is a widening gap between the pace of this progress and the human pace and capacity for governing it. Collective intelligence — human, artificial, and institutional — is needed to close this gap.
The practical goal.
The optimal goal – full human self-control achieved first, through a strong human global government and self-regulatory system – is implausible; and even if achieved, complete human governance of AI, through alignment and regulation, would still be implausible. Global strategies would be preferable to national strategies for such alignment as is possible. But global capabilities lag; and national strategies focused on restrictive regulation can simply hand the race to the less scrupulous. Only a more limited practical goal remains: to reduce the ever-widening gaps in alignment and regulation of AI in good time, as best we can, in the best ways. Thus the measures discussed herein, and others akin to them.
Conclusion
The transition from the nuclear to the AI–drone paradigm entails a transition from deterrence through fear to instability through asymmetry, and then to possible asymmetrical restabilization controlled by the first AGI. Human work on aligning and channeling the emerging AGI power must race to catch up as best it can.
Shamir Hyman, IT and AI Engineer
Ira Straus, Atlantic Council analyst
Andreas Olsson, Ontology & Systems Engineer