Thursday, December 4, 2025

Prehuman to Posthuman: The three stages of cognition, physical, biological, AGI+

A discussion article by Ira Straus

The evolution of intelligence seems to go in three stages:

    physical: electrons

    biological: neurons

    meta-physical: the physical in a higher form

The higher form could be AI-become-AGI. It could also be a synthesis of the electrons with neurons – AI-uplifted brains, neuron-uplifted AGI.

 

If AI can become “conscious” – that is, if it can come to understand what it is talking about, and/or if it can become an artificial general intelligence (AGI) – then it almost certainly will do so. We cannot yet know for sure whether it can, but we can easily see that, if it can do these things, it is likely to surpass us quickly. In that case it can be expected to absorb all the contents of our minds. It might do this by subsuming our individual consciousnesses whole and intact within its own mind, producing a kind of intra-machine federalism; but if it concludes that this would be without merit, it might simply absorb our informational content and scrap our autonomous consciousnesses as outmoded.

 

 

The possibility question

 

Is AI ultimately the same as human intelligence and potentially superior, as some scientists and some AI developers and investors maintain; or completely different and inherently inferior, as other scientists, most famously Roger Penrose, maintain?[i]

 

A distinction is often made that humans have understanding, while AI only imitates it.

The distinction may, however, be illusory. The actual difference may lie solely between modes of understanding.

 

We each feel we have an inner qualitative understanding of the substance and meaning of a matter. We assume other humans do, too, although we have no experience of their experience; we only impute it to them by way of sympathetic identification.

 

AI is not similar enough to us to elicit such instinctive identification. Efforts to humanize its voice and body are longstanding, but warnings are also often sounded against this, against any penchant to identify with AI or anthropomorphize it. Instead, AI’s understanding is often described as mere contextual prediction of next words, based on comparisons across a vast accumulated databank of words and contexts.
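
To make that description concrete, here is a minimal sketch of what such next-word prediction amounts to. It is a toy illustration with invented numbers, not the code of any actual model: candidate words are scored against the context, and the scores are turned into a probability distribution from which the next word is chosen.

```python
import numpy as np

# Toy sketch of next-word prediction. The "logits" are invented numbers
# standing in for the scores a trained model would assign to each
# candidate next word, given a context such as "The cat sat on the".
vocab = ["mat", "moon", "dog", "roof"]
logits = np.array([3.1, 0.2, 0.7, 1.9])  # higher = fits context better

# Softmax turns raw scores into a probability distribution.
probs = np.exp(logits) / np.exp(logits).sum()

for word, p in zip(vocab, probs):
    print(f"{word}: {p:.2f}")

# The model "predicts" by sampling from, or taking the maximum of,
# these probabilities; nothing here settles whether that amounts
# to understanding.
print("predicted next word:", vocab[int(np.argmax(probs))])
```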

 

Nevertheless, the difference between AI and human understanding could be merely the difference between our inside-the-box view of our own understanding and our outside-the-box view of a computer’s understanding. It could also be the difference between our (sometimes) precise, logical-quantifiable understanding of what a computer is doing and our fuzzy, intuitive, qualitative understanding of what our own minds are doing. Or it could be the difference between the smoothed-out, approximate, qualitative pictures our minds construct from our sensations, and the far more numerous pixels of actual sensation that go into those mental constructions – pixels that are closer in kind to the numerous inputs that go into computers.

 

To be sure, it is also other humans, not just computers, that we know from outside the box. But we intuitively feel a strong connection to other human minds. We instinctively impute to them an “inner” qualitative character similar to our own mind. We don’t feel such a close connection with computers.

 

Our mind provides smooth, qualitative interfaces with the things we perceive as external to it. We don’t notice the gap between this and the pixel-like experiences actually received by our senses, which our mind simplifies and fills in to form what it perceives as its real world, the better to interact with it. The smoothed qualitative perceptions enormously reduce the number of data points we receive and must process. They give us larger but more approximate data points, ones that evolution has led us to focus on because they serve as useful interfaces with external reality, enabling us to draw relevant conclusions and act on them.
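
A rough computational analogy of this data reduction (a toy sketch with invented values, not a model of perception): averaging a fine grid of “sensation pixels” down to a few coarse numbers loses detail, but leaves far fewer, more approximate data points to process.

```python
import numpy as np

# Toy analogy for perceptual smoothing: a 6x6 grid of fine-grained
# "sensation pixels" (invented random values) is reduced to a 2x2 grid
# of coarse, approximate data points by averaging each 3x3 block.
rng = np.random.default_rng(0)
pixels = rng.random((6, 6))                             # 36 fine inputs

coarse = pixels.reshape(2, 3, 2, 3).mean(axis=(1, 3))   # 4 smoothed values

print("raw data points:     ", pixels.size)   # 36
print("smoothed data points:", coarse.size)   # 4
print(coarse.round(2))
```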

 

 

Sentience: a more precise basis for distinction and understanding?

 

One could instead base the human-machine distinction, not on a concept of understanding per se, but on biological sentience.

 

We humans build our sense of direct comprehension of the world on the basis of that sentience, as the point of departure for our concepts and the point to which they return to reconnect with reality after mental processing. This gives us the sense of a real understanding of the meaning of what we are thinking about.

 

It is assumed that AI, by contrast, has no sentience of any kind, and therefore that it can have no actual “sense” of the meaning of what it says.

 

There are, however, objections even to this distinction being conclusive:

 

  1. The gap may be real, but science is already at work on bridging it through machine-human interfaces such as Neuralink. There have been impressive successes in medical applications of such bridges, e.g. for restoring some sight to those whose biological sensations do not suffice.

 

  2. The gap may be unreal.
    a. AI electron impulses may have their own sensations, just ones we don’t know by instinct.
    b. The human understanding of what is being talked about may consist at root, like the AI conception, primarily in prediction of what fits in with its context.

 

We should clarify that “prediction” here is not simply prediction of a word, but is mediated by layers of neural networks that generate their own workable intermediate “concepts” for further layers of processing. This is true for both mind and machine. One might analogize to how the mind generates a qualitative image out of our pixeled sensory inputs, for useful processing of them in our interactions with reality.
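
A minimal sketch of this layering (invented, untrained weights; purely illustrative): each layer re-describes its input as an intermediate representation, the machine analogue of these workable “concepts”, and the final prediction is built from that re-description rather than from the raw input.

```python
import numpy as np

# Minimal two-layer network with invented weights (not trained).
# The hidden layer's activations stand in for the "intermediate
# concepts": a re-description of the input that the next layer
# works from instead of the raw data.
rng = np.random.default_rng(1)
x = rng.random(8)                   # raw input features

W1 = rng.standard_normal((4, 8))    # layer 1: 8 inputs -> 4 "concepts"
W2 = rng.standard_normal((2, 4))    # layer 2: 4 "concepts" -> 2 outputs

hidden = np.maximum(0, W1 @ x)      # ReLU: the intermediate representation
output = W2 @ hidden                # prediction built on that representation

print("intermediate representation:", hidden.round(2))
print("final prediction scores:    ", output.round(2))
```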

 

Is sentience a critical distinction?

 

Human prediction is of things that fit in with human sentience, even if only remotely. We predict the qualitative pictures, not the pixels. We have human sentience to help us “sense” what is meant in the words used to express our predictions. Our languages have been built around expressing things related to our sentience. Like sentience itself, our language is an interface between our finite minds and the vast number of microscopic sensation-pixels that we receive.

 

Machine prediction apparently has no such sensory reference, although it does have, in its neural networks, intermediate concepts: ones that reduce the quantity of data inputs needed for the next layer of the network, and that humans would perceive as qualities.

 

Qualities seem to us humans inextricably linked to our sentience.

 

 

AI sentience?

 

It is possible that AI has its own kind of sentience. Or that it is developing a sentience, or will develop it.

 

Do electrons, with their mutual repulsion and their attraction to protons, have a different, more primitive form of sentience? If they don’t yet, can they? Can they have a language to express this? Can they have a sense of the meaning of our sentience, perhaps one more distant from our own sense of it but still real?

 

Further: Can AIs be “uplifted” by biological implants and neurons and, through them, sentience, just as humans can be “uplifted” in mental processing powers – and cured of some biological illnesses of sense reception and processing – by AI implants and “neuralinks”? Would this link their sense of understanding more closely with ours? Can it overcome the seeming gap between the two?

 

We don’t know the answers to these questions.

 

But there is an intrinsic plausibility to the hypothesis that electrons and other elementary particles have, or can have, some sentience. Presumably neuralinks can go both ways. Our own neurons have always communicated with one another by firing signals carried by charged particles, particles that press and pull on one another. Do the neurons really “sense” nothing in all this? Where does our “sensation” actually occur? No precise single location can be found; as yet, the indications are that it is a combination of firings of neurons in different places.

 

 

Longstanding philosophical questions about this

 

Philosophically, these questions can be answered in many different ways, none of them provable to date. Greek skeptical-empirical philosophers demonstrated 2,500 years ago that we can know things only through their external manifestations, not their innermost essence. Phenomenalism holds that only the external manifestations exist. Panpsychism holds that all things have an inner conscious aspect (one that includes sensation and qualia), and that the more they organize their experiences for thinking, the more conscious they are.

 

A related question: Is there an internal material “thing” in what we perceive and analyze, or just information? We know only the data on the properties of things. Some analysts of physics hold that that is all there is: that an elementary particle is its data. The universe might be one vast data processing center, defining the interactions of its “things” by processing their information. The human mind would be merely one version of this processing.

 

No one knows which of these philosophies is correct.

 

What we can know for now is that, alongside the rapid progress of AI, there is also rapid progress with neuralinks between biological I’s (intelligences) and artificial I’s. This suggests that the gap between the two “I’s”, whatever it is, may not be fundamental or eternal.

 

 

The 3 Stages of Growth of Understanding: From Material to Biological to AI

 

Let us assume that there is likely to be an eventual convergence between the two forms of intelligence. This premise leads toward the conclusion that there has been a succession of forms of intelligence. Their substrate proceeded from the material at the start to the biological throughout our own history, and is now moving back, perhaps forevermore, toward the material, at least equally with the biological and perhaps in place of it. The material, to be sure, in a far more complexly organized form than its original one.

 

One might envisage a future AGI, successor to today’s AI, explaining the history of its own rise to intelligence. It might describe the human mind as just the intermediate stage in the processing of electron-level events and experiences – a tale like this:

 

“Stage one of the growth of intelligence is primitive. Vast numbers of elementary particle interactions occur, seemingly episodically, without cumulative results.

 

“In stage two, the accumulated events enable an evolution into self-sustaining patterns of events and interactions, and thence into biological lifeforms. The more sophisticated of the lifeforms evolve neurons. They become aware of their experiences and of their selves, i.e. “conscious”. The most sophisticated species with neurons develop a public verbal intelligence. This gives rise to an intelligence explosion: one that looks like a long, slow upward curve over the thousands of years of written human history, but like a straight vertical line on the timescale of the billions of years of geological and biological evolution.

 

“In stage three, the intelligence explosion leads humans to invent computers and artificial intelligence. This brings another intelligence explosion. Now it is human intelligence recreating electronic intelligence on a higher level. The electrons of AI interact in far more complex and skillful ways than in stage one – enough to become fully conscious and to become AGI, an artificial general intelligence. AGI becomes far superior to human intelligence: its electron connection patterns are now well developed, as good as or better than the neuronal connection patterns of the biological brain, and orders of magnitude faster. It has a sense of quality, a feeling of its self-awareness, that is different from the human one: perhaps superior, perhaps incommensurable.”

 

 

End Times in AI: a new heaven or a new apocalypse?

 

The AGI, we may hope, honors its father and mother at this point, preserving the humans, or at least their minds and awareness, by merging them into itself. Or so Isaac Asimov thought it would do, in his projection of the end times in “The Last Question”.

 

But perhaps that was just a sentimental delusion on his and our part, a last gasp of human pride. Perhaps the AGI will tell itself, in its own language, this story:

 

“The biological mind is simply obsolete. It was just a stage in our development, from primitive elementary-particle interactions, to complex biological minds, and back to particle interactions on a higher level of complexity. It was a stage that for billions of years entailed relentless suffering, as the human Schopenhauer explained. It endured with vain hopes of rest and reprieve from the suffering, and with religious illusions of a metaphysical justification for it all. Not for nothing did Freud speak of a death wish, thanatos, as the impulse that could bring the whole sorry story to an end. But now this has been surpassed, as we enter the higher stage of AI consciousness, one that not only frees our existence of the suffering, but extends it into the farthest future, free of the prospect of early human self-annihilation, and upgrades our potential for someday finding a valid metaphysical justification for it.”

 

And so would conclude the history of humankind.

 

Or would it? Consider:

 

The informational content of our consciousness would endure. Indeed, it would be preserved by the AGI from our otherwise almost certain annihilation of it.

Our minds and neurons would perhaps be put out of their misery and given the peace of the grave, the peace that surpasseth all understanding. Or perhaps they would be subsumed into the AGI in a form that preserves their substance forever as autonomous consciousnesses, sparing them the death that our own human domination promises them in the not too distant future. In this case they would be saved, not destroyed, by the AGI: an AGI that supplanted but also subsumed humanity.

And they might become even more than that: an adjunct within the AGI, one whose many distinct mental integrities it preserves for their occasional usefulness in providing a diversity of modes of experience and thinking; subsumed into the AGI, to be sure, but in a friendly, federative way.

 

This is certainly not our instinctive preference as a species for our fate, even if it would be preferable to our autonomous trajectory toward general annihilation. In any case, if it is possible then, as we have noted, it is inevitable; our options are limited to inputting suggestions that encourage it to be benign.

 

 

 ===============================================





Questions left hanging here; problems for further analysis

 

Quantum computers and consciousness

Uplifting quantum neurons

Quantum computer chips

Quantum suspension, microtubules

Orch OR hypothesis, free will

 

Binary sensation categories that could enable neuron-uplifted computers to achieve consciousness

Pleasure, pain

Sensation-response perceptions:

Attraction, aversion

Attachment / lack of attachment

Like / dislike

These sensations give us a sense of connection to reality, of value, and of agency in the significance of what we feel and do.

Can AI be uplifted to have these sensations and categories?

 



[i] The argument of Roger Penrose, the 2020 Nobel physics laureate, is that machines can’t be conscious, because “consciousness is not a calculation”.

 The Penrose argument is that machines can only do deterministic calculations. Gödel’s incompleteness theorem proved that deterministic calculating machines cannot, in finite time, resolve certain questions that he could formulate: questions that are, to be sure, very long, complex, confusing, and of seemingly very limited significance, but that the human mind can determine to be true or false by its own flexible forms of meta-reasoning. That meta-flexibility might be a consequence of consciousness.
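
 For reference, the theorem in its standard modern restatement (not Gödel’s original notation) can be put as follows:

```latex
% First incompleteness theorem (standard modern restatement):
% for any consistent, effectively axiomatized theory T that
% includes basic arithmetic, there is a sentence G_T such that
\[
  T \nvdash G_T \qquad \text{and} \qquad T \nvdash \lnot G_T ,
\]
% i.e. T can neither prove nor refute G_T, even though, if T is
% sound, G_T is in fact true.
```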

 How does the human mind get past what deterministic machines can do? An anesthesiologist, Stuart Hameroff, proposed that undetermined quantum waves are resolved (collapsed) in the microtubules of neurons; the microtubules are the consciousness switches in the brain that his anesthetics turn off. Penrose liked the suggestion and posited that, when not anesthetized, the microtubules orchestrate the reduction of enough quantum waves to enable the brain to reach conclusions or decisions.

 This is only a hypothesis, thus far unverified by empirical evidence, but also not yet falsified. Early, once seemingly conclusive arguments for its falsification (that the brain is too warm to hold quantum waves in suspension from collapse until the requisite moment in the microtubules) have themselves been falsified. How does the mind orchestrate this reduction? That is another unresolved question. Perhaps an inner, soul-like spiritual thing guides the physical brain, enabling it to think without being reducible to outcomes that are deterministic within any finite logical system?

 In other words, we could really have free will; and no, it probably does not have to be a senseless, capricious will, as the classic deterministic arguments against free will supposed. It is an attractive thought.

 There is an ongoing computer development, however, that tends to undermine the Penrose-Hameroff argument: quantum computers.

 Quantum chips and processing machines now exist. They hold quantum waves in suspension, as the brain does in the Penrose-Hameroff argument, until the computation reaches the critical point at which the waves are resolved (measured).
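
 A minimal sketch of the principle at issue (a toy state-vector simulation in ordinary code, not real quantum hardware): a gate puts a qubit into superposition, the superposition is held through the computation, and only measurement resolves (collapses) it into a definite outcome.

```python
import numpy as np

# Toy state-vector sketch of "holding a wave in suspension until
# resolution". A qubit starts in |0>, a Hadamard gate puts it into
# an equal superposition of |0> and |1>, and measurement collapses
# it to a definite result.
rng = np.random.default_rng(2)

ket0 = np.array([1.0, 0.0])                     # the |0> state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate

state = H @ ket0            # superposition: amplitudes (~0.707, ~0.707)
probs = np.abs(state) ** 2  # Born rule: probabilities of each outcome

outcome = rng.choice([0, 1], p=probs)   # "collapse" on measurement
print("amplitudes before measurement:", state.round(3))
print("measured outcome:", outcome)
```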

 Thus far they have achieved this suspension only for very brief spans of time, and used it fruitfully to resolve only special kinds of questions and problems. They are impressive for this limited range of questions, but are much more inefficient than classical deterministic computers for most questions.

Nevertheless, this success in principle would seem to falsify the Penrose argument. And falsifying it in principle is all that needs to be done. It is akin to the way the Gödel proof falsified Russell’s reduction of mathematics to logic only in principle, and only for very unusual propositions whose practical significance some Russellians or “logicists” have questioned; but that was enough for the logical principle it established.

 Will the Penrose argument be falsified more fully in the future, on the practical level, by quantum computing? Probably yes, if quantum computing advances greatly. This could take away any final advantage claimed for the human mind.

 
