About Consciousness
Technology has always imposed its language upon us.
In the terminal (DOS, Unix) era, you had to learn the language of the system in both directions: you had to learn commands and understand their output. That was how you got things done, and boy, could you get things done if you knew how to compound these techniques.
Over the years, more technology and user interfaces have brought the language of computers closer to our own, or have sometimes improved familiarity in more roundabout ways. Working with menus and dialogs is second nature to us now, even though it does not map to physical things that existed before software conjured them. We slowly made machines speak languages that required less learning, that felt more intuitive.
Now, the constraint of technology is thinning even more dramatically with the advent of LLMs and other types of generative AI. These systems can receive information from the user and send it back in any language, in images, in voice, and can even interweave these modes. If you can speak any language, you can compute; or such is the promise of this era.
This raises the philosophical question of whether, and at what point, there is a ghost in the machine. Just as predictably, it creates political “sides” to which one belongs when trying to answer that question.
Perhaps the interesting and universal question that has emerged is the one at the very center: what is consciousness, anyway? When asking whether there is a ghost in the machine, can we answer what that ghost really is supposed to represent?
From one side, the answer seems to be no. Turn the problem to a different side, and it becomes a kaleidoscope of thoughts, displaying insight but lacking clear resolution. I wrote about this here in a fictional story. Perhaps more succinctly: I feel we care less about our ignorance in this matter than we should!
The slipperiest concept
Whether there can ever be a digital form of consciousness can only be answered properly if there is a good definition of what consciousness is. We need to agree on this at least.
At the same time, those people in the back who appreciate a good paradox will know about Gödel’s incompleteness theorem. They will realise that it is perfectly reasonable, and more profoundly even expected, that some statements can be true of reality and still never yield a clear “yes” or “no” from within the system that expresses them. It is reasonable to suggest that humans will never be able to formulate a clear definition of consciousness because our axiomatic system of language cannot support such a thing. Likewise, we would not understand a definition given to us by a more elevated entity.
Carving out the definition of consciousness seems more feasible if you take the negative-image approach and try to define non-consciousness instead.
Why is a rock or a tree non-conscious? What happens to a conscious creature if it undergoes death or even narcosis, both from the outside and from within?
Not coincidentally, one of the great minds of our era, Roger Penrose, has had a crack at these issues; he has, by the way, also identified the link between the challenges at hand and Gödel’s incompleteness theorem.
Narcosis in particular is deeply interesting for rather surprising reasons. It seems, foremost, that narcosis can be a physical rather than a chemical process. The inhalation of xenon gas can cause someone to become unconscious, yet xenon, being a noble gas, binds with essentially nothing under biological conditions. You cannot chemically alter a biological system with such a substance. It can only change things on a physical level.
In a crude picture, it could physically block pathways; in a more speculative one, it could alter quantum-mechanical states. Understanding what happens here is currently out of our grasp, although there are, as always, theories.
I hesitate to point you to the different schools of thought I know of, such as Penrose’s “Orch OR”, as I am not an expert on the existing literature. I can share my own perspective a bit, though; there is enough here to dive down a rabbit hole together…
Consciousness and unity
If you have the appetite for it, and I can guarantee you that doing this well requires willpower and a certain soundness of mind, I invite you to try to believe, as an exercise, that you are not conscious at all.
In the same manner that you cannot consciously control your heart to beat, let’s pretend that you cannot truly control your thoughts.
I can scarcely ask you to close your eyes, as you need them to read the instructions. Just observe your thoughts arising as the ebb and flow of the tides, as you’re reading this, and as you offshoot into your next thought… Now try to believe your grasp on all of that is an illusion, and these things happen entirely out of your control.
You’d have to pretend you are in fact an algorithm, one equipped with a special self-observing sub-algorithm. This sub-algorithm allows you to observe some thoughts as they happen, something you have somehow misconstrued as a control mechanism, when it is not really controlling anything at all. I really invite you to try it, as a sort of meditative state, but step out if you become too uncomfortable with the idea. This is, by the way, not unlike Theravada Buddhist meditation techniques.
Whether control is an illusion or “real” is not the goal of the exercise. The goal is to appreciate a more profound aspect of consciousness that is there regardless of what you believe: unity.
All your thoughts and sub-thoughts eventually come together in one unified bus of thoughts; in technology, a channel that coordinates various bitstreams, such as a USB connection, is called a “bus”.
Whatever you believe about how far your free will reaches, you are a collection of independent things that look and feel like one thing because of some aspect of your consciousness.
You may have many inner voices, and depending on the situation, they may feel parallel or serially connected, but there is always a thing that binds all of them together into one entity.
What if unconsciousness is the opposite: the annihilation of this unity? What if this is the physical process caused by the xenon gas, or by death, whether gradual or instant? What if that unity disappears and you dissolve into a disconnected set of things, rendering you no longer conscious?
A core question is whether this sense of unity is emergent or whether it is a structural part of us. Do beetles have it as much as dogs have it, and can you even define gradations of this unity?
I believe one of the most important books anyone can read, at any stage of their life, is “Gödel, Escher, Bach: An Eternal Golden Braid”. A remarkable facet of the book focuses on the self-recursive nature of consciousness, which gives some idea of how to quantitatively define that sense of unity: by the depth of the self-recursion.
All of this is very much at odds with current methods of artificial intelligence.
LLMs display a complete lack of unity. If you pose a question and an LLM generates an answer, each word (or token) can literally be predicted by a completely different machine, and you would never know it from the outside: the behaviour would not feel any different. Your ChatGPT sessions are perfectly well handled by many completely disconnected machines, something most people do not realize.
LLMs predict next tokens, and current approaches do not possess any intrinsic “memory”. Even when “memories” are added to an LLM, they remain context-free: the memories are simply bits of data that can be appended to a different LLM’s input without any change in behaviour.
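A minimal sketch of that context-freeness (a toy stand-in, not a real LLM; the prediction rule is entirely made up): if the model is a pure function from context to next token, it makes no difference which machine, or how many different machines, compute the successive tokens.

```python
# Toy stand-in for an LLM: a *pure* function from a token sequence to the
# next token. It carries no internal state, so any number of disconnected
# instances produce exactly the same continuation.
def next_token(context):
    # Hypothetical prediction rule: echo the last token, reversed.
    return context[-1][::-1]

def generate(prompt, steps, instances):
    out = list(prompt)
    for i in range(steps):
        model = instances[i % len(instances)]  # a different "machine" each step
        out.append(model(tuple(out)))
    return out

machine_a = next_token  # imagine these running in different data centres
machine_b = next_token

one_machine = generate(["hello"], 3, [machine_a])
alternating = generate(["hello"], 3, [machine_a, machine_b])
# From the outside the two runs are indistinguishable:
# both yield ["hello", "olleh", "hello", "olleh"]
```

Real LLM inference is vastly more complex, but the structural point holds: it is a stateless function evaluation, so a session can hop between machines without any observable difference.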
If we designed LLMs that continuously update their own weights, they would be less context-free. If the weight-changing process were driven by the LLM itself, we could have a self-recurrent system. Currently, nobody is doing this, although I have no doubt that someone will; note that such architectures were already devised in the 1990s, an era that lacked the processing power to develop them deeply but certainly did not lack the motivation and imagination to recreate intelligence artificially.
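To see why self-updating weights would matter, here is a deliberately tiny sketch (the class and its single "weight" are purely illustrative): once responding changes the parameters, the same input no longer yields the same output, and history becomes baked into the system itself.

```python
# A system whose "weights" change as a side effect of every interaction.
class SelfUpdatingModel:
    def __init__(self):
        self.weight = 1  # stand-in for billions of parameters

    def respond(self, x):
        y = x * self.weight
        self.weight += 1  # the act of responding alters the responder
        return y

m = SelfUpdatingModel()
first = m.respond(10)   # 10 * 1 = 10
second = m.respond(10)  # same input, different output: 10 * 2 = 20
```

Unlike the stateless case, you could not swap in a fresh instance mid-conversation without the behaviour changing; the machine and its history have become one thing.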
Unity is a key aspect of consciousness, and while it is not easy to grasp, perhaps you might agree that it provides a tentative framework to explain the difference between non-conscious and conscious organisms.
Consciousness and entropy
“Oh God,” I hear you cry, as you fall with me deeper into the hole. “Not another physics thing.” I merely laugh; we are both servants of gravity at this stage, and there is no going back.
Let’s assume you are an accountant who wants to fully account for the state of every single atom and molecule in a section of space. Generally speaking, entropy is then the degree of missing information you have as that accountant. In particular, if we let nature simply take its course, the amount of entropy will increase as time goes on. It always does. You will gradually start lacking information on where everything is and in what state.
If you imagine a coffee to which milk has just been added, then at first you know which parts of the coffee cannot contain any milk, but as the milk intermixes with the coffee, you start lacking that information. This is a fundamental law of nature, and it is very possible (and a belief that I hold) that this law is so fundamental that it is really a definition of time, rather than something that happens as time passes.
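The coffee-and-milk picture can be made quantitative with Shannon entropy, the standard measure of missing information. In this sketch (an eight-cell "cup" with a crude diffusion rule of my own choosing), the entropy of the milk's position only ever rises as mixing proceeds:

```python
import math

def shannon_entropy(p):
    """Missing information (in bits) about where the milk is."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def diffuse(p, rate=0.25):
    """One crude mixing step: each cell leaks a fraction to its neighbours."""
    out = [0.0] * len(p)
    for i, q in enumerate(p):
        out[i] += q * (1 - rate)
        out[(i - 1) % len(p)] += q * rate / 2
        out[(i + 1) % len(p)] += q * rate / 2
    return out

p = [1.0] + [0.0] * 7       # all the milk starts in one cell: zero entropy
entropies = [shannon_entropy(p)]
for _ in range(5):
    p = diffuse(p)
    entropies.append(shannon_entropy(p))
# entropies climbs from 0 toward log2(8) = 3 bits: the accountant
# gradually loses track of where the milk is, and never regains it.
```

Running the mixing step in reverse would require information the accountant no longer has, which is exactly the asymmetry the essay is pointing at.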
In general, when you want to think about these things, you have to imagine that nature always just wants things to mix and even out, in a persistent attempt to balance out everything. In a way, it craves a blissful state of noise and a complete lack of structure.
Now, there are luckily exceptions. There are systems that can lower entropy in a local region of space. It is not infeasible for a human to devise a system that takes the cappuccino and extracts the milk and the coffee from it again, through some far-fetched chemical and mechanical process. In doing so, that human will inevitably increase entropy somewhere else in space (for instance, by creating heat or exhaust products), but in some local bubble of space, the entropy was lowered rather than increased as it would have been if left to its own devices.
It can arguably be said that stars do the same, or more generally that gravity lowers entropy in some local region of space; other approaches, by the way, flip this around and treat gravity itself as an entropic force.
Living systems, though, can do this in highly diverse forms. One of the most remarkable is their ability to procreate other living systems, creating a cascading set of bubbles of lower entropy.
This is a good starting point for defining life itself, and for seeing how trees differ from rocks. Trees, for instance, are manifestations that are statistically highly unlikely to have occurred, and they create and support other highly unlikely things, such as more trees or an entire ecosystem. Rocks are victims of statistics and have no such downstream impact on further statistics.
Now, there is an interesting question to ask: does artificial intelligence lower entropy in any such way?
To me, it seems the answer is very much no. When I send a query to an LLM, it processes the bits I send and outputs other bits; strictly speaking, an LLM outputs numbers, which map onto some token space. It always amuses me that many people think LLMs produce language, when they actually produce numbers; the language is an interface we applied to those numbers. When it processes these bits and bytes, it reacts like any machine, a machine which in this case is made of molten sand. The local entropy rises, mainly through heat, and to nature nothing will have happened that runs counter to the ordinary logic of thermodynamics, gravity, the nuclear forces, and so on. It is a rock, or should I say, a very intricate maze that is still made from rock: quite literally, silicon.
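The "LLMs produce numbers, not language" point is worth making concrete. In this toy sketch (the four-entry vocabulary is entirely made up; real models use vocabularies of roughly 100,000 entries), the model side only ever emits integers, and language appears when a lookup table is bolted on afterwards:

```python
# A hypothetical toy vocabulary mapping token ids to text.
vocab = {0: "the", 1: "ghost", 2: "in", 3: "machine"}

model_output = [1, 2, 0, 3]  # what the network actually produces: numbers
text = " ".join(vocab[t] for t in model_output)  # the interface we applied
# text == "ghost in the machine"
```

(In reality the model emits a probability distribution over the whole vocabulary at each step, from which one number is chosen, but the decoding-to-text stage is exactly this kind of table lookup.)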
This is certainly opinion rather than gospel, but I can’t escape the feeling that the same does not quite happen when humans or other species perform certain intellectual acts. It feels as though, much like any living process, we are able to bend the statistics and cause nature to follow a course other than the one it would prefer.
Some would like to involve quantum mechanics at this point too, to further explain why artificial systems would be different: perhaps because they can only interface with reality in a very classical manner. Even though computers rely fully on quantum-mechanical effects internally, their input and output interfaces are currently defined purely classically.
Of course, that does leave a more profound question: what about adding biological matter to current artificial intelligence systems, in strategic spots? Suppose we add a bit of mouse brain, or god forbid something more advanced. With the experiments in reservoir computing in mind, I think we can all predict that humanity will apply this, and quite undoubtedly already has in lab environments. This will not make defining intelligence and consciousness any easier.
Perhaps our most profound intellectual accomplishment as human beings will be to accept that there is a grey zone between consciousness and non-consciousness. It seems to me that the only two sensible positions are either to accept that this is the case, or to believe in a complete abolition of those concepts: to believe that nothing is conscious, including yourself. Of course, there are those solipsists who may entertain a third viewpoint, but I suggest they go their own way.
Let me know your thoughts, this is a fun problem to think about!

"To be deterministic or not, or something in between."
Might consciousness be one of the few things capable of bending those rules (entropy, ...), even if subtly? If so, this could be where the lines between determinism and non-determinism blur, suggesting that something more profound exists in the grey zone you describe.
Consciousness might occupy a spectrum (?), so my question to others: are you more conscious than your pet? Why (not)?
Fascinating!
Diving further into the metaphorical rabbit hole: what are your thoughts on organisational consciousness?
I've been pondering the concept.
Groups of people that come together develop traits over time, such as (muscle) memory, complex decision-making, system adaptiveness and other forms of emergent behavior, that allow the group to achieve higher-level outcomes no single individual could.
Unity is an essential trait for this higher level consciousness too: an organizational mission, a family line, a tribal survival need or a soccer league scoreboard. There's always a shared purpose driving the group's consciousness.