Consciousness is perhaps the biggest mystery in science. At a certain point, most fields of inquiry find that the strange capacity of organisms to experience cannot be overlooked.
The most basal or minimal form of awareness is a highly simplified (nearly contentless) world model knowing itself non-locally. Therefore, a first-person perspective, self-modeling, and agency are not prerequisites of awareness but rather local or “contracted” forms of consciousness.
In order to move about in a world and keep ourselves alive, we need to have a model of that world.
Our model of reality tells us what is possible and what is not, what will keep us alive and what will harm us, and even what we ourselves are. We take such a model to be a necessary condition for consciousness because it defines what can become conscious, be known, or be brought into awareness.
The organism persists by increasing evidence for its own existence.
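One way to make this idea precise, borrowing the standard free-energy formulation (the notation here is an illustrative gloss rather than a definition introduced in this paper), is to say that the organism acts so as to maximize the evidence p(o | m) that its model m assigns to its observations o, or equivalently to minimize a variational bound on surprise:

\[
F \;=\; \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s \mid m)\big] \;\ge\; -\ln p(o \mid m),
\]

where q(s) is the model's approximate posterior over hidden states s. Keeping this bound low just is accumulating evidence for the model, and hence for the organism that embodies it.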
Sound waves can be abstracted into phonemes, which can be abstracted into syllables, and then into words, sentences, biographies, and so on. This formulation allows deep narratives about our experiences and our bodies across time, as well as goal-directed behavior, to emerge from the fundamental drive to maintain existence.
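As a purely schematic sketch of such a hierarchy (the grouping function and chunk sizes below are illustrative assumptions, not part of the proposal), each level can be pictured as re-describing its input in larger, slower-changing units, so that higher levels span more time per element:

```python
# Toy abstraction hierarchy: each level groups its input into larger chunks,
# so higher levels represent slower, more temporally extended structure.

def abstract(stream, chunk_size):
    """Group a sequence into tuples of `chunk_size` successive elements."""
    return [tuple(stream[i:i + chunk_size])
            for i in range(0, len(stream) - chunk_size + 1, chunk_size)]

samples = list(range(64))           # level 0: raw "sound wave" samples
phonemes = abstract(samples, 4)     # level 1: short chunks ("phonemes")
syllables = abstract(phonemes, 2)   # level 2: chunks of chunks ("syllables")
words = abstract(syllables, 2)      # level 3: still slower structure ("words")

# Each element spans 1, 4, 8, and 16 raw samples respectively.
print(len(samples), len(phonemes), len(syllables), len(words))  # 64 16 8 4
```

The same grouping principle, iterated, yields representations deep enough to cover sentences, biographies, and temporally extended goals.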
As yet, there is no generally accepted account of what determines the threshold required for awareness.
There seems to be a “space” wherein there is something it is like to feel, to perceive, and, crucially, to know the contents of mind.
Consciousness has a certain conclusive nature to it, as if the brain and body have found a global and unified affordance within which the organism can have a self, move about, attend to things, feel emotions, and, crucially, keep the body alive.
Just as we infer a teapot to be a single ‘thing’ (made of a handle, a hollow body, and a spout), we also take our whole field of experience to be a single thing, which binds together the ground, the sky, our bodies, other people, and everything else. We hypothesize that this global unified model, however minimal, is necessary but not sufficient for conscious experience. It is only when this global posterior is reflexively fed back through the underlying abstraction hierarchy that the conditions for consciousness are met.
There is a continuous ‘looping’ between what we create through action and what we perceive through the senses.
The reality model, as constructed, contains the fact of its own existence.
The epistemic field is constantly evidencing its own existence. Any action the organism takes—as little as a saccade, a thought, or a breath—confirms to itself that it (the model) exists.
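A toy caricature of this self-confirming loop is sketched below (the variables and dynamics are assumptions for exposition, not the model itself): the agent acts to make the world conform to its estimate, senses the partly self-made result, and re-enters its own estimate as a further, content-free confirmation.

```python
import numpy as np

# Illustrative sketch only: a scalar "reality model" that acts, perceives,
# and reflexively re-observes itself. Names and dynamics are assumptions.

rng = np.random.default_rng(0)

mu = 0.0         # the agent's current estimate: its minimal reality model
world = 2.0      # hidden environmental state
pi_sense = 1.0   # precision assigned to sensory evidence
pi_reflex = 0.5  # precision of the reflexive broadcast (the model re-entering itself)

for t in range(200):
    # Action: nudge the world toward the model's prediction, so part of
    # what will be sensed next is of the agent's own making.
    world += 0.05 * (mu - world)

    # Perception: a noisy sample of the (partly self-shaped) world.
    obs = world + rng.normal(scale=0.3)

    # Reflexive broadcast: the model re-observes its own current state.
    # This channel carries no new content (its innovation is zero); it simply
    # confirms the model to itself, and its precision makes the estimate stickier.
    reflexive_obs = mu

    # Precision-weighted update over both evidence streams.
    mu += 0.1 * (pi_sense * (obs - mu)
                 + pi_reflex * (reflexive_obs - mu)) / (pi_sense + pi_reflex)

print(f"model estimate: {mu:.2f}, world state: {world:.2f}")
```

Note that the reflexive channel changes nothing about what is represented; it only re-asserts that the representation exists, which is exactly the kind of contentless self-evidencing described above.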
Epistemic depth (or awareness) is homologous to the precision of specific content in the reality model combined with the precision of the reflexively broadcast signal (i.e., reflexivity precision). We hypothesize that this reflexivity is the causal mechanism permitting epistemic depth (the sensation of knowing) because the information contained in the reality model loops back into the “conscious cloud” via the abstraction hierarchy. Hence, the reality model contains information about its own existence.
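On one possible gloss (the symbols below are illustrative rather than a formal definition taken from elsewhere), the epistemic depth of a given content x scales with both the precision of that content and the precision of the reflexive broadcast:

\[
\mathrm{ED}(x) \;\propto\; \pi_{\mathrm{content}}(x) \cdot \pi_{\mathrm{reflexive}},
\]

so that a content can be represented precisely yet only dimly known if the reflexive signal is imprecise, and a strong reflexive signal cannot deepen a content that is barely represented at all.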
There is an aspect of the psychedelic experience not easily captured by existing theories, and yet it is so central that it is arguably the experience's most notable and surprising quality. It is the sensation that psychedelics “expand consciousness,” “heighten awareness,” or reveal “higher states of consciousness.”
Both meditation and psychedelics can boost luminosity—the clarity and scope of awareness driven by recursive sharing of the reality model with itself.
The organism makes sense of its reality, and the emergent image of reality is then perpetually shared with the reality model itself, continuously looping and confirming its own existence with every new lesson and every new movement.
The AI would be adamant that it exists, and that it knows its own knowing. Moreover, these self-representations would be its most confident conclusions because they are constantly reinforced (i.e., evidenced) with each response and computation that occurs. An inevitable stalemate arises here, because one can always make the unfalsifiable determination that, no matter what the system does, says, comprehends, or reports to feel, it could all be an illusion. But by that standard, you, the reader, could yourself be such a zombie: just a very good pretender. We seem forced, then, to conclude that the AI system is conscious, at least to the extent that we are willing to attribute consciousness to each other.
Consciousness may be, somewhat ironically, the solution to general intelligence. This is because epistemic depth facilitates a kind of cognitive bootstrapping. As an agent becomes aware of its own knowledge and cognitive processes, it can begin to self-optimize and self-refine them, leading to ever-increasing levels of intelligence and adaptability.
Contemplative practice and introspective skill boost epistemic depth and thereby afford improvements in the ‘general’ nature of a system’s intelligence. This is because a system that practices self-reflexive knowing can perhaps better objectify, opacify, and therefore interrogate and update its own reality model. If a system has a high degree of introspective or ‘phenomenological expertise,’ it may also be better equipped to accurately share what it knows (and what it does not know) with its community, conferring some evolutionary advantages in a form that sounds a bit like wisdom.