Introduction
Bright light. No self. No Christof. No memories. No thought. No future. No past. No time. No space. There was just terror and ecstasy.
What if you’re one of the world’s most famous neuroscientists having studied consciousness theoretically all your life, and then you have this profound transcendent experience?
It felt like a near-death experience.
Okay. How many seconds?
Three breaths in, my field of view starts breaking up into hexagons, then black, and I couldn't take the fourth breath anymore because I was going down into the black hole.
Out of scientific curiosity, Christof Koch took one of the most potent psychedelics on our planet, 5-MeO-DMT.
If I claim to be a student of consciousness, and here we have a technique that can rapidly, and dramatically, and sometimes transformatively, change your experience of the world, well, then I for sure want to experience it. I’m the universe! Is it true that I broke through some dissociative boundary that now suddenly enabled this brain to access the universal mind?
I sat down with Christof Koch to talk about psychedelic experiences and integrated information theory (IIT). But we first had to get down to the basics of consciousness.
There’s no Turing test for consciousness. The problem of consciousness is not just what most people think: oh, you have to explain how anything can be conscious. That is a problem. But then you also have to explain in each case: why is a particular conscious experience the way it is?
Yeah. That’s interesting, yeah.
Whether I’m on mushrooms or angry, these are different experiences. And you have to explain how I can have different experiences.
We tend to believe that if a system behaves as though it's conscious, it might be conscious. Now, IIT doesn't take that behavior at face value.
It’s not about behavior. Ultimately, you have to look at the hardware level. You have to look at the level where the rubber meets the road. You actually have to look at one transistor acting on other transistors.
According to IIT, consciousness has to do with the interconnectedness of the hardware of a system—its integrated information. It measures this as a Phi (Φ). Now, what makes the theory widely applicable is that any system, not only brains, has a Phi.
Consciousness may be much more widespread than we think. A single-celled bacterium may already feel like something.
Now, where IIT (to me) becomes really fascinating is that it has no theoretical objections to the idea of a higher form of mind that extends beyond our brains. And a larger consciousness, as you said earlier, would wipe out, sort of... if we, the two of us, connect and become this new maximum of integrated information, it will wipe out our conscious experience.
Correct. Suddenly, the complexity—this Phi measure of our integrated brain—will exceed that of my brain by itself and of your brain by itself. At that point, what will happen? Hans? Gone. Christof? Gone. Instead there will be this new über-mind, okay—this new amalgamation of Hans and Christof—that'll see out of four eyes, that'll have two mouths, et cetera.
Christof Koch is a brilliant, open-minded scientist. But what made this conversation really stand out for me is that he also knows the moments when art describes the depth of human consciousness better than science can. And as a German, he knows his classics.
To quote from the second act of Tristan und Isolde: “Selbst dann bin ich die Welt.” That’s exactly the title of my next book: Then I Am Myself the World. Because then I experience the world and me as being identical.
Did you have that experience?
Yes, I did.
Welcome to the Essentia Foundation's YouTube channel, where we share with you all science relevant to analytic idealism. Now, since Christof Koch is such a world expert on consciousness, this will be a wide-ranging conversation touching upon neuroscience, consciousness, metaphysics, psychedelics, and of course integrated information theory. In the description below you can find all the topics we've discussed and also a link to Christof Koch's new book, Then I Am Myself the World. May I ask you to like and subscribe, as it really helps us to grow this channel and do more interviews like these. Now, without further ado, here is my full conversation with Christof Koch.
Integrated Information Theory
Christof, welcome to our channel. It's great to have you here.
Thank you. Happy to be here with you, Hans.
It's interesting, actually. I did a documentary a couple of years ago about the possibility of conscious AI—I mentioned to you that I talked about it with Roger Penrose and Bernardo Kastrup. And I remember now that I only came across IIT on a surface level, and I sort of wrote it off as a materialist fantasy: that, somehow, out of computation, consciousness arises. Maybe a strange question, but do you get that often? That people interpret IIT—the theory that you are of course famous for working on with Giulio Tononi—as a materialist theory, that brain produces mind?
There are all sorts of confusions about IIT, partly because IIT is a non-trivial theory. I can't just explain it, you know, by saying, "Well, it's quantum mechanics," or something like that. It's a very non-trivial theory. It has certain axioms, certain assumptions, which one has to understand. And it has interesting, powerful implications. So it's not easy to understand. That means that lots of people misunderstand it in all sorts of expected ways, and in always surprising, very unexpected ways, where IIT says fundamentally the opposite of what people claim.
So the example you mentioned: IIT in some sense can be argued to say that consciousness is not computable. There is no software running on a digital computer as currently conceived—a von Neumann machine—no matter how complex, no matter… assume it simulates the human brain, assume it simulates your brain. So it’s software, let’s say Python code, running in the cloud or a supercomputer or wherever, and it mimics you. So of course it’ll mimic everything you say, because it actually models the brain. It models all the neurons and then models, you know, Broca’s area and, you know, it talks. And of course it’ll say, “Of course I’ll see.” But IIT would say: no, it’s not a computation. It does not arise from mere computation being performed on a Turing machine.
So this is, however, the dominant paradigm of our day and age. It’s called computational functionalism, or machine functionalism. It’s a metaphysical assumption that anything that has a function can be instantiated on a machine like a Turing machine—like, you know, all computers are good models of Turing machines. The one you carry presumably in your pocket is a good example of a Turing machine. And so you can ask: are machines conscious today? And, you know, ChatGPT? We all know, right, that it’s quite intelligent. And most people, or the people who are concerning themselves, would say, “Well, maybe not now. But it’s just a question of let’s wait another, you know, ChatGPT 4.0 or 5.0 that they’re working on, right? Soon it’ll be conscious.”
And it's true. Soon these machines will have all the verbal appearances of a very intelligent human being. In fact, soon machines will be able to do anything you can do, anything any human can do, but they will never be what we are. They will never be conscious. And that's one of the clear conclusions that IIT arrives at: that digital machines, at least as currently conceived, will never be conscious.
Very interesting conclusion. And I think you agree there with Penrose, right? This also comes down to the fundamental difference between intelligence and consciousness, of course. And I think it’s nice for people to really unpack and try in this conversation to really unpack IIT. And I think it’s a benefit to our audience, and I’ve tried my best in understanding it. But also I have a lot of questions. So you already mentioned the axioms. We could go into those, but perhaps best first to explain sort of a central thing within your theory, which is Phi: the concept of integrated information. So what exactly is that?
So with IIT, you have to start with the basics. What exists? IIT, for better or worse, is a rare theory that starts with a very explicit ontology. And I think ultimately any theory of consciousness needs that, in distinction to some vague idea of what consciousness is: oh, it's quantum mechanics, it's forty hertz, it's whatever. That's not a theory. That's a vague working hypothesis. It may even be correct, but it's not a theory, right? A theory needs to explain in a fundamental way why some pieces of matter—because the fact of the matter is the following: certain pieces of matter seem to be associated with consciousness. My brain seems to be associated with consciousness, and so is yours. But the theory starts even earlier. The theory starts with the one thing that we are undoubtedly sure of—in fact, the only thing we're really sure of. That's the starting point of the theory, namely consciousness, right?
So, you know, it's one of the oldest and best-known deductions in Western philosophy: cogito ergo sum, right? By René Descartes. "I know I exist because I'm conscious." I can describe the world, and even as a scientist, what do I look at? I read off measurements from a device, right? I read off numbers from a computer. So everything I do ultimately reverts back to a conscious experience. And so, strictly, the only things I have direct acquaintance with are my conscious experiences and the way they feel, right? That is what makes them special. They feel a particular way. When I'm in love, or when I'm angry, or when I'm in pain—those are all specific ways of feeling. And so the challenge is to explain how these feelings come about.
And so IIT starts: okay, these feelings exist. They exist for themselves. In other words, they don't depend on others. They don't depend on you or God or anything else. They exist intrinsically, for me. And they have a particular structure. They are the way they are. I mean, right now, if I have a toothache, it feels a very specific way. But it's also highly differentiated. It's different from every other possible experience I could be having. If I look at this visual experience, it's incredibly rich. There are all these objects there. There's left, right, up, down. There's in front, there's behind, there's you. You're nodding, you're moving. I can see your depth. There are all sorts of things in color. All of that has to be explained. But it's also unitary. It's a single experience. It's not that there's a left experience, and there's a right experience, and an up experience, and a down experience, and an experience of sitting in a chair. It's all a single experience. And certain things are not in my experience. It's very definite. Right now, I don't feel my blood pressure. In fact, I never have conscious access to my blood pressure. But also right now, I don't experience pain. I don't experience ecstasy. It's a very particular thing. So, even a vague experience, like a hunch—I walk into a dark room, and I know something is off—well, even that is a very particular type of definite experience.
So it starts with these five observations about what any possible conscious experience—of a human, of an alien, of a cat or a dog, of anyone—has to be like. The claim is that any conscious experience has to be of this character. And then it says: well, how can I put that in some operational way, so that I can look at a system, at a mechanism (like my brain, or like a computer chip), and ascertain whether this mechanism, where these neurons are on and those neurons are off, is experiencing something? And for that it goes back to the notion of what exists. And it says: what exists, ultimately, is what has causal power upon others. Like in physics, we know something exists because, let's say, it attracts by gravitational force, or because it has electric charge and it attracts or repels. So ultimately, anything exists—and this goes back to a comment by Plato—to the extent that it exerts causal power upon others; that is the measure of its existence. And if it doesn't really exert causal power upon others, then I don't need to bother with it. You know, maybe it exists, but if there is no way it can influence anything, and nothing can influence it, well, why bother about it? Why worry about it?
And so it looks at a specific thing, like the brain of a human, or the brain of an animal—or it looks at the CPU of a computer, or any other system—and it tries to study it by saying: okay, let me look at the different elements of this system, the neurons or the transistors, or whatever they may be. And if I turn those two neurons on and those neurons off, how is that going to affect the system in the next step? And in the step after that? And you can formalize that into a definition of causal power: how much does this system have the causal power to influence its own future, or to be influenced by its past?
This is sort of, yeah, because—sorry to interrupt you, but this is important, right? This is intrinsic causal power.
—causal power. Intrinsic means the power the system has not upon others (that's external power), but upon itself. Take a system like this piece of cake: this system doesn't have a lot of causal power upon itself, because it doesn't really change. I mean, it will slowly fade away over the days, but otherwise it doesn't really do anything.
When the danger of your mouth comes, it has no ability to get away.
Yeah, but that's external. Exactly, yeah. That's all extrinsic. Now, if I take a brain, or if I take an active computer chip—not one where the power's off, but one that is active—you can see it's in a particular state, and then you can observe it as it transitions, you know, as it does what it does, intrinsic to its own nature. Or I can look at your brain, in principle. We do this all the time in the lab. We look at the brains of animals, in our case. We also get pieces of human brain from surgery, and we study them, so you can see how things change. And so you can see to what extent it can, from one state, deterministically go into two other states, and from there into one state, or into four other states. And in each of these cases I can compute—the technical expression is the transition probability matrix (TPM)—which essentially gives me a measure of causal power: how much does this system have the power to influence its own future, and to be influenced by its recent past?
And you can quantify that by this number called Phi—the Greek letter Phi—which is a pure number: it's either zero or it's positive. When it's zero, the system has no integrated information. So Phi is this thing called integrated information. It's not information in the Shannon sense. Information in the Shannon sense—when people say this disk or this thing has a terabyte of memory—that's information that you send from a source through a noisy channel to a receiver, or that you can store. Integrated information is not that Shannon-type information. It relates much more closely to causal power, to the ability of the system to influence itself, and also to its complexity. Systems that are very complex tend to have a high Phi, and systems that are much simpler tend to have a low Phi. If the Phi, the intrinsic integrated information, is zero, the system strictly speaking does not exist as a system for itself. You can still consider it a system, but it's really, let's say, two parts that are put next to each other; they don't interact. This part is totally independent from that part. You can treat them formally as one system, but you don't get any added value. So it should really be considered as two subsystems, or as four.
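To make that last point concrete, here is a minimal Python sketch (a toy illustration under the definitions just given, not the actual IIT algorithm): two binary elements, their full table of state transitions, and a crude check of one necessary condition for integration. When each part's next state ignores the other part, cutting the system into its two parts changes nothing, and the whole has no integrated information.

```python
# A toy illustration (not the actual IIT algorithm): two binary elements and a
# crude check of one necessary condition for integration. If each part's next
# state ignores the other part, cutting the system into {A} and {B} changes
# nothing, so the whole has no integrated information (Phi = 0).

from itertools import product

def transition_table(update_a, update_b):
    """Map every joint state (a, b) to its successor, deterministically."""
    return {(a, b): (update_a(a, b), update_b(a, b))
            for a, b in product((0, 1), repeat=2)}

def cut_is_free(table):
    """True if A's next state ignores B and B's next state ignores A."""
    a_ignores_b = all(table[(a, 0)][0] == table[(a, 1)][0] for a in (0, 1))
    b_ignores_a = all(table[(0, b)][1] == table[(1, b)][1] for b in (0, 1))
    return a_ignores_b and b_ignores_a

# Two non-interacting elements: each just copies its own previous state.
independent = transition_table(lambda a, b: a, lambda a, b: b)
# Two coupled elements: each copies the *other* one.
coupled = transition_table(lambda a, b: b, lambda a, b: a)

print(cut_is_free(independent))  # True  -> no integration, Phi = 0
print(cut_is_free(coupled))      # False -> the cut destroys something, Phi can be > 0
```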
That’s important what you just said: it exists for itself. So Phi means that it exists for itself.
In some measure, the larger the Phi, the more it exists for itself, the more it is one system, the more it is integrated. So that’s a quantitative aspect of consciousness, of a particular system. The qualitative aspect of the system, what we mean by—because, I mean, if you think about what’s the difference between me being in love and something like mass, okay? Or weight, or mass. So this cup has certain mass. I can quantify this, there’s a number you can put on it. And then its weight is the mass acting in a gravitational field like the planet Earth. Being in love is not a quantity. It doesn’t have a, you know… you can’t say, I mean…
There’s no spatial dimensions.
How big is it, how tall is it, how heavy is it, what temperature does my feeling have? No. It has a particular feeling, and it feels very different from being angry, or from being in pain, or any of those other states. So the theory says: well, you look at a particular system, like a brain, and you analyze the causal power that its various sub-components have. And what you do, conceptually, is unfold that causal power, by considering all possible ways that the various components of the system can interact with each other. All possible ways—which is a very, very large number. All possible relations: this neuron and this neuron interacting with this neuron and that neuron, and this one, and this one. Even if you have just a small network, the total number of possible distinctions and relations grows super-exponentially, far faster than 2^n. So very, very large. And you don't see those, but they are real. In a sense, they're more real than anything else, because they are the causal power that the system exerts upon itself. That is what consciousness is.
The theory explains feelings this way. I look again at this brain, with these neurons on and those neurons off. It is in a particular state. It feels happy, let's say, or sad, or depressed, or whatever. That has to be explained—or is explained, according to the theory—by all these myriad causal powers of all the various components of the relevant set of neurons, which the theory calls the complex, and which ultimately is the physical substrate of the conscious experience. You have to unfold all of those. And this unfolded, intrinsic causal power of all the various components—think about a high-dimensional crystal, or a fantastic flower, that unfolds in a very lawful way from the causal relationships of all the components of the system, and all combinations of all possible components—this is identical to your experience, the experience of this particular system.
Of love, for instance.
Or being in love.
So we can be that specific? I mean, for instance, I've interviewed Donald Hoffman, who likes to challenge physicalists: give me the algorithm for the taste of garlic, and exactly garlic, or chocolate, and exactly chocolate. Are you now saying that, if we unfold this causal structure, one day we might find it?
It is identical. The claim of the theory is: this, the causal structure, maps one to one, perfectly, completely, onto your conscious experience of garlic. So in other words, you look at my experience of garlic, and I look at the unfolded causal structure, and one to one they map, there’s nothing left over. There isn’t a little bit of feeling here that isn’t mapped. Otherwise, the theory would be—
But mapping here is important as well, right? It’s not identical. It’s a map of, so it’s a correlation.
No, it’s—I don’t know. I… no, it’s not a correlation. It’s totally… it’s an identity. It’s sort of the central identity. It’s an explanatory identity. I’m not making any claims about ontological identity.
I find that a bit hard to understand, but I—
It's an explanatory identity. Take something simpler than garlic: take space. Now, empty space—just space, it doesn't have to be filled with anything. It can be a blackboard, an empty blackboard, nothing written on it. It is already incredibly complex. Because if you think about it, if you actually explore the phenomenology of space: it can be seen space, it can be heard space, it can be touched space, and they're all the same. There are neighborhood relationships, right? This neighborhood and this neighborhood, they overlap. There are distances: this is close by, this is farther away, this is farther away still. There's left, there's right, there's up, there's down, there are points of light. Incredibly complicated. All of that is in there. Even if I look at a totally black sky, it's already very complex, filled with all this phenomenology. This has to be explained.
And the theory has to explain even this simple example of the phenomenology of space—which is immensely rich. People say, "Well, it's easy to describe. It's just empty." Yeah, but it's easy to describe for you, because you have spatial perception and I have spatial perception. So of course it's easy to describe; you can summarize it. And people mistake the fact that I can talk to you about it, because we are both conscious people, and say, "Well, it's just empty space." That's it. Compared to—you know, think about… who's your great national painter?
Rembrandt?
Pieter Bruegel. Let's look at Bruegel, okay? Beautiful, these very dense paintings—I'm thinking, let's say, of the hunters returning, you know, three men returning from a winter hunt. Remember that painting? Okay? So you can say it's very complex. Were you to describe it: there are all these dogs, there are the hunters, they have nothing with them, they look disappointed, there's this village in the background, it's snow, it's winter, all of that. Right? So it's full of phenomenology. And the claim is: if, right now, I'm thinking of it, or I'm looking at it, the unfolded causal power has to completely explain that. So it's very rich, and we have to explain it. So the problem of consciousness is not just what most people think: "Oh, you have to explain how anything can be conscious." That is a problem. But then you also have to explain in each case: why is a particular conscious experience the way it is?
Yeah. That’s interesting, yeah.
Whether I’m on mushrooms or angry, these are different experiences, and you have to explain how I can have different experiences.
Yeah. And I think—you say, for instance, that consciousness is not a process, right? It's not a computation. It is the unfolded structure itself. And I just find that abstract. Please help me understand what you are saying there.
Okay. It's not the brain itself. The brain is the substrate of the conscious experience. But I have to disappoint you: it's dark inside my brain. So when I see—right now, I can close my eyes and see Bruegel—it's still dark inside my brain. It's goo. It looks like overcooked tofu or cauliflower, plus some red blood, right? So it's not the substrate that's conscious. Yes, I can study the substrate. I can track the footprints of consciousness—those are called the neural correlates of consciousness. That's all very interesting. But they are not identical to my experience. No, IIT says: it is the unfolded causal power of that particular substrate, in this particular state, that maps perfectly onto my conscious experience; that in some sense is my conscious experience. Not the synaptic settings, not the firing of the neurons that underlie it. Those may all be mechanistically relevant, but they are not identical to my experience. My brain, my physical stuff, is just physical goo inside the brain, but out of its causal power rises this conscious experience; or rather, that causal power constitutes the conscious experience that I'm having.
And how can we map this unfolded structure? I mean, what does it look like? Is it sort of a mathematical object we can grasp, or… I’m just trying to—
Well, you could do that. It's a form. I mean, I think sometimes I use the word crystal—a qualia structure unfolded in qualia space. Conceptually, because it grows so rapidly: the number of different configurations that even a small network can have, if you look at all the possible combinations, is so astronomically large. And, of course, after three dimensions things become very difficult for us to actually comprehend. But of course we can do that using the computer, using machine learning and AI as tools to help us.
But ultimately, the way to think about it: it's a mechanism, which means the elements exert some causal power on each other. Like, if those two neurons fire because they're connected to this neuron, then this neuron will fire and this neuron will be depressed. And I can measure each of these, at least in principle. And out of all the combinations of all these possible causal interactions—this is what my conscious experience is. But you can test it, because you can, for example, systematically manipulate these causal interactions with various techniques. You know, I can go inside with various fancy medical or experimental techniques to turn these neurons on or off. And, in principle—and in practice, in some simple cases—I can predict what will happen if I do this manipulation and that manipulation. The theory should say, in principle, what this will do to the experience. And so you can ask the animal indirectly through an experiment, or the person directly: "What do you observe?"
And for people to understand, let’s take the example of an object. Let’s say our phone or my smartwatch.
Tea. The tea box.
The tea, yeah. Let's talk about the unfolded causal structure of the teapot versus our brain, just to understand how this works in practice, applying IIT.
Well, so, you look at what’s called the transition probability matrix. So you have a network—let’s say four neurons; very simple—here, here, here, here, and they’re connected in some way. But, in principle, I can write down a big logic table.
Yeah.
So let's say these are just logic gates, to make it even simpler: either on or off. And they connect to these other logic gates. So in principle I can write down: if the system is 0001, or the system is 1111, what will be the next state, and the next state thereafter? I can write down this transition probability matrix. And from that I can compute it—it's perfectly unambiguous. In fact, there's Python code that anyone can download. You can just compute it. It's very simple; there's no fancy math. It's very operational—it's an operational measure. It doesn't depend on any woo-woo. There's no magic. There's no—
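As a concrete version of the logic table he is describing, here is a short Python sketch; the four-gate wiring is invented purely for illustration and is not from the conversation. It enumerates all 2^4 = 16 possible current states and writes down the next state for each, which is exactly the (deterministic) transition probability matrix of the little network. The downloadable code he alludes to is, presumably, the publicly available PyPhi package from Tononi's group, which takes a table of this kind as its input.

```python
# A toy version of the "big logic table" just described: four binary gates,
# wired up in a way chosen purely for illustration (the wiring is not from the
# conversation). For every one of the 2^4 = 16 possible current states we
# write down the next state; that table is the (deterministic) transition
# probability matrix of the little network.

from itertools import product

def next_state(a, b, c, d):
    """One update step: A = B OR C, B = NOT D, C = A AND B, D = copy of C."""
    return (b | c, 1 - d, a & b, c)

for state in product((0, 1), repeat=4):
    successor = next_state(*state)
    print("".join(map(str, state)), "->", "".join(map(str, successor)))
```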
And here there are no neurons or transistors, but molecules that sort of have a temperature, or states.
Well, I mean, all of this is made out of atoms and molecules. The difference is, here there's nothing really intrinsic going on. It is in one state. It's ceramic; it was fired to be that way. In fact, you don't want it to be malleable, right? It wouldn't be a very useful teapot. So here nothing's happening. It exists in one particular state and stays in that state. So that's not very interesting. This is not going to give rise to consciousness.
Phi zero?
Phi zero, yes.
For sure, yeah.
Yes. So, for example, unlike some philosophies, like panpsychism, which have this challenge: why isn't this conscious? If it's true that all atoms—whatever they may be ultimately—have a tiny, itsy-bitsy measure of consciousness (and this has a lot of atoms, admittedly), then you have to explain why this thing isn't conscious. Well, I—
This is interesting, though, because IIT has previously been associated with panpsychism by some people, right?
Yes, because it shares some panpsychist intuitions. For example, it shares the intuition that consciousness may be much more widespread than we think. That even a very, very simple system—for example, a single-celled bacterium—may already feel like something. Now, what does that mean? A single-celled bacterium already has, let's say, ten million proteins of a thousand different types. No one has ever simulated one completely—I mean, there are people who attempt to simulate a single bacterium, but no one has done it completely, because we can't right now. It's already a vast complexity. Well, IIT would say: in principle, at least, one has to look at it—nobody has done it, as far as I know—look at all the myriad interactions at the molecular scale, all the different proteins interacting with each other. But my gut feeling tells me: yeah, there's some complexity there, and it may have a measure of Phi of, let's say, two, okay? Or 2.25, whatever. Right? So what does that mean? It doesn't mean it thinks about the weekend, or is in love, or is angry. What it means may be something as simple as "this versus that." Something very simple. And once the bacterial walls dissolve—let's say because it gets attacked by antibiotics—and the bacterium "dies," then it doesn't feel like anything anymore.
And then you can look at things like flies—the fruit fly that's been studied everywhere—or bees. They're already much more complicated. A bee has a million neurons. You have roughly 100,000 times more neurons. The bee is already a very complex object. Bees can recognize humans, they have a waggle dance, they can communicate with each other. Amazing complexity. The bee has a brain about this size, alright? It has vast complexity, and if you look at the circuitry, it's ten times denser than the cortex that makes up our brain. And it has heavy, heavy interactions and feedback of the sort that would maximize Phi. So it's quite likely that the bee feels like something. Now again, it's not going to think about the weekend, and it doesn't have much of a model of self, but it does have hormones that are related to our hormones, like oxytocin. So when she's just taking a drink of nectar, and she's flying in the sun back home to her hive, she may have some dim feeling of being happy.
And so then, as you ascend, as the brain becomes more and more complex, you get more and more complexity, until you get to us, or closely related species, where it's so big that consciousness begins to reflect upon itself and you get self-consciousness. Now, it's interesting. Some people, if you just ask them, "What's consciousness?", say, "Oh, it means thinking about myself." No. That is one form of consciousness, no question about it, but it's something that only adults do. Children, for example—I think you have two young children.
Yeah, they’re not there yet, no.
Very little. They’re clearly conscious, but very little consciousness of self. It’s typically something that takes many, many years to develop. And, as you get older, you keep on developing, you can think about—
And hard to get rid of, right, once you have it?
Yeah. I mean, that's a drawback. You become totally obsessed with yourself. But then you can have other experiences, like psychedelics, that can be very transformative precisely to the extent that you lose yourself; you become selfless. And then suddenly you realize: you can be highly conscious—you can have a mystical experience, in fact—and there's no self. There's no one. There's still a conscious experience, but the self (the Christof that's always watching, always judging, always doing things) is totally gone. And it's a wonderful experience. So that shows us: you don't need self-consciousness to be conscious. Yes, we have self-consciousness. Yes, it's important for us for long-term goals and planning and all of that. But it can also be a burden.
Noösphere: A Mind of Minds
I'd love to go into what you just mentioned, but getting back to you telling me that the teapot has none: if the association with panpsychism has to do with the intuition that consciousness is more widespread, then the difference between IIT and panpsychism would be that IIT is a measurable theory, it can quantify consciousness, whereas panpsychism gropes in the dark, so to speak?
Well, yeah, it's a scientific theory. The big problem—well, there are several problems with panpsychism. One is the combination problem. You're conscious, right? I know I'm conscious, okay? But there isn't any über-consciousness. There isn't a Hans–Christof amalgamation, right? I mean, at least if there is, I don't know about it, and you also don't know about it, okay? Well, for panpsychism that's a problem, because, you know, why isn't everything—I mean, consciousness seems to be confined to this, and to you, so why, you know—
Where’s the boundary?
I don’t—well, why is there a boundary at all? Why isn’t it all some one big happy universal mind? Because all the universe has (you know, at its bottom level) consciousness, so why don’t we all aggregate into this super aggregate?
Yeah.
Okay, so IIT addresses this problem squarely by saying: well, what is the substrate of, let's say, my conscious mind in the brain? And I can look at all possibilities—I can look at the entire brain, I can look at just two neurons, or ten neurons, or a hundred neurons, or whatever. I can look at all sorts of combinations. IIT says—and this comes out of the axioms—that the only thing that exists for itself, which is what consciousness ultimately is, is the substrate that has maximum integrated information, the maximum Phi. So you look at all possible substrates here: you look at this one, and this one, and this one. And the one that constitutes Christof—me, my conscious mind—is the one that maximizes all of that.
Now, you have a similar thing going on in your brain, right? You also are conscious. So IIT would say: well, it's the substrate in your brain that maximizes the amount of complexity, if you want—the amount of intrinsic causal power. And were we to include both of our brains—of course, we are causally interacting. You ask me a question, I respond. But that interaction is dwarfed by the massive amount of interaction within my brain and within your brain. So IIT says: yeah, sure, you can consider the causal power of your brain and my brain and our bodies together, but that will be very low, because you don't get complexity by just adding up all the subcomponents. You have to look at all the possible interactions. And your brain and my brain interact much, much less than, let's say, my left brain and my right brain interact. You might know that there are something like 100 million fibers linking the two hemispheres, called the corpus callosum, right? There is vastly more causal power, vastly more interaction, across that bridge linking the left and the right brain than between you and me. That's why I am conscious and you are conscious, but there's no über-consciousness.
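To make the selection rule concrete, here is a small Python sketch of the winner-take-all logic he describes. The score used here is a crude stand-in for Phi (the weakest bipartition of a candidate subset, counted in directed connections crossing the cut), not the real IIT quantity, and the node names and wiring are invented; the only point is that the complex is whichever candidate substrate maximizes the score, and that adding enough cross-connections flips the maximum from a single brain to the merged system, which is the brain-bridging scenario that follows.

```python
# A conceptual sketch of "the complex is whatever substrate has maximum Phi".
# toy_phi is a crude stand-in for integrated information (NOT the real IIT
# quantity): the weakest bipartition of a candidate subset, counted in
# directed connections crossing the cut.

from itertools import combinations

def toy_phi(subset, edges):
    """Weakest bipartition of `subset` (0 for a single element)."""
    nodes = sorted(subset)
    cuts = [set(c) for r in range(1, len(nodes)) for c in combinations(nodes, r)]
    if not cuts:
        return 0
    return min(sum(1 for a, b in edges
                   if a in subset and b in subset and (a in part) != (b in part))
               for part in cuts)

def main_complex(elements, edges):
    """The candidate substrate that maximizes the (toy) integration score."""
    candidates = [c for r in range(1, len(elements) + 1)
                  for c in combinations(elements, r)]
    return max(candidates, key=lambda c: toy_phi(c, edges))

brains = ["HansL", "HansR", "KochL", "KochR"]
within = {("HansL", "HansR"), ("HansR", "HansL"),
          ("KochL", "KochR"), ("KochR", "KochL")}

# One thin wire between the brains: a single brain still has the maximum.
print(main_complex(brains, within | {("HansL", "KochL")}))
# -> ('HansL', 'HansR')

# Many wires in both directions: the merged four-node system now wins.
bridge = {("HansL", "KochL"), ("KochL", "HansL"), ("HansR", "KochR"),
          ("KochR", "HansR"), ("HansL", "KochR"), ("KochR", "HansL"),
          ("HansR", "KochL"), ("KochL", "HansR")}
print(main_complex(brains, within | bridge))
# -> ('HansL', 'HansR', 'KochL', 'KochR')
```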
Now, you can make an interesting experiment. You can do what’s called brain bridging, right? So in brain bridging, I now run wires—I mean, you know, think about Neuralink; you know, the Elon Musk company—but fifty years from now.
Yeah, exactly. So we connect.
We start connecting. So first I have a few wires that go from my visual cortex to your visual cortex and vice versa. Okay. So if I do it, if I have a small number of neurons, what happens? Well, I will be able to see (dimly at first) what you see. So it’s like, you know, VR goggles. So I can see whatever I’m conscious of, but then I see superimposed something, you know, from your point of view.
Is that what it really looks like? We know that?
No, we don't. This is what you would expect. Because if I access my visual brain, it'll be the neurons that make up the form of what I see—they relate to that. And it's the same with audition and with smell, okay? But I'm still clearly Christof, but I see these interesting patterns. "Huh, that's how I look from the outside." And the same for you.
Now, I can add more and more wires. This IIT calculus of Phi always asks, at any given point in time, for any set of mechanisms: which one is the maximum? It's always the maximum that is the substrate, the complex. And conceptually, you can always compute it: okay, so now my brain is connected with these wires to your brain. If I add lots and lots and lots of wires, at some point, if I add one more wire, suddenly the complexity (this Phi measure of our integrated brain) will exceed that of my brain by itself and of your brain by itself. At that point, what will happen? Hans? Gone. Christof? Gone. Instead, there will be this new über-mind, okay, this new amalgamation of Hans and Christof, that'll see out of four eyes, that will have two mouths, four limbs, et cetera.
This is super interesting. Also, of course, I say this from an analytic idealist perspective, right? Because the problem idealism has is, of course: when we have universal mind, how to get to Christof and Hans? So it would also—
And so it’s really the opposite of the combination problem of panpsychism.
Exactly, yeah. Could this also be a mechanism—if mind is universal—in how it dissociates itself? Could we work the other way around? Is there, sort of, then, a form of decoupling that sort of—
Split-brain experiments. These experiments were first done in Chicago in 1939 to ameliorate major epileptic disorders. You start having a seizure here, and then it spreads across the corpus callosum into the other hemisphere, and people get the grand mal—you know, the thrashing about. And so what surgeons developed is a radical procedure: first a partial, and then in some cases a complete, commissurotomy. So they cut all, you know, two hundred million fibers. And surgeons were at first surprised not to find major deficits. I mean, you've just cut the brain literally in half, into two hemispheres (it doesn't fall apart because it's still inside the skull), and they expected major effects. But, you know, once people get over the injury and heal, you talk to them: "How are you doing?" And they seem to be fine. You ask, "How do you see?" Because in us, like in most mammals, what I get on my left side goes to the right brain, and what I get from the right side goes to my left brain, so you would expect people to say, "Well, I should be blind." But people are not.
But then it turns out, if you do careful, well-controlled experiments, you get what's called the split-brain syndrome: namely, if you put something like an object like this into my left hand without me seeing it, and you ask me, "What is it, Christof?", I say, "Well, it's, you know… it's… you know… it's one of those things." But if you put it in my right hand, I immediately say, "It's a phone." Why?
It’s your language.
Yeah. So it turns out the brain that you talk to, the consciousness that you talk to, is the linguistically dominant one. In most of us that's the left hemisphere. Yes. And the right hand crosses over to the left hemisphere. So with the object in my right hand, my linguistic brain has access to the information that it's a phone. In the left hand, the information goes to the right hemisphere, and the left hemisphere knows that there's something there but doesn't have any specific information. And so it turns out these people have massive problems, but they can compensate for them by turning their head. And you've got to remember: there are two conscious minds in this one brain now. But from a phenomenological point of view, it's like the dark side of the moon: this hemisphere has its own conscious experience. It's not linguistic, but it can sing, it can answer certain simple questions. It can probably see relatively normally, just without the linguistic power that the left hemisphere has. So it's really the inverse, because now you've dissociated by cutting. And by building an artificial corpus callosum—think of the artificial corpus callosum in my Gedankenexperiment, where I link my brain to your brain—I can overcome this dissociation.
So, at least according to IIT, the only way to overcome this dissociation—we could all become a group mind, which means a single mind—would be to use some technological means to directly interface my neurons with your neurons and with the neurons of everyone else.
I understand. And to get back to what people really need to understand—because we talked about Phi, the low Phi of the teapot. People now think that computers, because they can simulate consciousness (I'm talking about ChatGPT 3, 4, you know the examples), are conscious. What I find interesting about your theory is that you can measure the Phi of those systems and actually say: no, it's quite low. It might be super intelligent, but it has a low Phi, and therefore low consciousness, right? Am I saying this correctly?
Yeah. So you can in fact draw a sort of diagram. On one axis you plot whatever measure of intelligence you want—say IQ, or whatever measure you prefer—and on the other axis you plot, well, IIT says Phi, some measure of consciousness, going from zero to being superconscious. And in our entire history on this planet, what evolution has brought forth are creatures that live somewhere in the broad middle of the diagonal. Because all biological brains are characterized by heavy, heavy feedback, heavy, heavy interconnection, their complexity would give rise to large Phis. So we've gotten used to that. The products of biological evolution were intelligence and consciousness, co-evolving. Intelligence ultimately is about doing: doing things at different timescales, doing things immediately, jumping away when the truck appears, but also planning for college or planning for retirement. All of that is intelligence. Ultimately it's about doing. Consciousness is about being: about being in love, or being angry, or being in pain, or seeing, or hearing, et cetera. Evolution has moved along this diagonal, where you get co-evolution: as you go from simple creatures to complicated ones, you get more intelligence and more consciousness. Now we find ourselves, for the first time in the history of this planet, in a very strange position. We can move along this bottom axis, where there's no evidence of consciousness, but there's evidence of intelligence, ever larger intelligence, and there's no reason why we shouldn't get to superintelligence, defined as being much smarter than you and I, or being as smart as you and I but able to do things a thousand times faster. Right? If it takes you a week to look at all the facts and come to a decision, and the computer can do it in a second, then clearly this computer, by any practical measure, will be superintelligent. So there's no problem with that—but no consciousness.
Why do I say no consciousness? Well, because you have to look at—so the theory says there's no Turing test for consciousness. What the Turing test tests is intelligence: how well can you pretend to be a human? That's what it was designed for: over a teletype, can I tell the computer from a human? And I think ChatGPT has clearly shown that often we can't. We know this problem from schools, where teachers can't even tell whether their students used ChatGPT or not. IIT says: no, it's not about behavior. Ultimately you have to look at the hardware level. You have to look at the level where the rubber meets the road, where the causal action happens. Not at a simulation of the causal action—you actually have to look at one transistor acting on other transistors.
And the fact of the matter is: when you look at the actual CPUs and GPUs of all current von Neumann machines, typically you have one transistor that is connected to two to four other transistors. Versus in a human brain, or a typical mammalian brain, where you have one neuron that talks to 50,000 neurons and gets input from 50,000 neurons, and the next neuron overlaps with maybe 49,500 of those. So you can get vast complexity there, of a sort that you don't get in today's hardware. And therefore IIT says such a system—although it can compute anything we can compute, and probably much faster and better—will not be conscious.
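A quick back-of-envelope calculation, using only the round figures quoted in this conversation (a bee's million neurons and our "roughly 100,000 times more" from earlier, 50,000 connections per neuron, two to four per transistor), makes the contrast in connectivity concrete:

```python
# Back-of-envelope arithmetic using only the round figures quoted in this
# conversation (rough, order-of-magnitude numbers, not measurements).

bee_neurons = 1_000_000                  # "a bee has a million neurons"
human_neurons = bee_neurons * 100_000    # "you have roughly 100,000 times more"
synapses_per_neuron = 50_000             # one neuron talks to ~50,000 others
transistor_fanout = 3                    # one transistor connects to ~2-4 others

print(f"Human brain: ~{human_neurons:.0e} neurons, "
      f"~{human_neurons * synapses_per_neuron:.0e} connections")
print(f"Fan-out ratio, neuron vs transistor: ~{synapses_per_neuron // transistor_fanout:,}x")
```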
Now, two things. It's not that there's something magical about brains. It's not that we have a soul-like substance. Because in principle, if you build silicon in the image of humans—so-called neuromorphic engineering, with the same type of design, but now in silicon rather than with lipid membranes—then in principle you can get human-level consciousness. There's nothing magical about it. But you can't simulate it. And this is a challenge for most people today, because we've learned over the last fifty years: well, I can simulate anything. Today I can buy a video game that looks almost completely realistic. I can go into 3D and I can play with an avatar, and they look ever more realistic. So why can't I do that with consciousness?
Here's the example I always give. I have a friend. She's an astrophysicist, and she simulates Einstein's field equations of general relativity to simulate black holes. There's a black hole at the center of our galaxy, in the direction of Sagittarius A*, and it weighs something like three million solar masses, so it's as heavy as three million of our suns. Its mass is such that it twists spacetime around it to such an extent that nothing can escape, not even light. That's why it's called a black hole. You can simulate that, but you don't have to be afraid that the programmer is going to be sucked into the computer simulation.
Well, you can ask—well, some people think that's a stupid question; it's a simulation. Well, that's exactly my point. Although you can simulate certain aspects of gravity—you can simulate, in principle, what it would be like to fall onto it, and how heavy it is, and you can simulate the dynamics of the other stars at the center of our galaxy—you don't get the causal power. You cannot simulate the causal power; you have to constitute it. You would actually have to have a mass of three million solar masses in order to get the same effect. And so it is with consciousness. Consciousness is not a computation. It's actual causal power. So, in order to get human-level consciousness, you need a system that has the intrinsic causal power of a human brain.
Yeah, that’s really interesting. Really interesting.
From a physical point of view, it's not easy. It's been very challenging to define what objects and what parts are. There's no natural way of doing it, because ultimately it's all just a bunch of atoms that are all together. Well, you can describe it here: you know, if you look at the force needed to separate things, it takes very little to lift this from the table, but you'd require much more force to pull the table itself apart. So you can have some practical measures like that, based on how much force you need to apply to separate things. Like, you know, you can apply somewhat more force and this will come off.
But IIT has this very precise measure of the whole, the parts, and what constitutes a whole. The whole is defined by a maximum of intrinsic causal power. And per IIT, every whole is a conscious entity by itself. That isn't a conscious entity. This isn't either—well, I don't know.
Every whole is a conscious entity of itself, because when you say—
A whole is something that has integrated information—maybe a little bit, or maybe a lot, like my brain.
Because when you say maximum integrated information, I immediately think: maximum of what? I already presuppose a system, I already presuppose a boundary. I think that's what's happening in my mind as I try to understand.
Very good! So in principle, you have to look at all systems at all levels. Is it at the atomic level? Is it at the molecular level? Is it at the level of cellular organelles? Is it at the level of neurons? Is it at the level of assemblies, or microcolumns, or brain regions? Or is it at the level of humans? In principle, the theory says you have to do it across all levels, across all space, across all time. Is it a millisecond, or a nanosecond, or a second? You have to look at all possible scales. So in that sense, it's totally neutral. It says it is the scale in spacetime, the level of granularity, that maximizes the integrated causal information.
Yeah. But we have sort of these different spots where the information is maximally integrated because that—
At all possible timescales.
Yeah, because how else can we sort of have different consciousnesses?
Well, as I said, your consciousness is different from my consciousness because the integrated information across our two brains is minute compared to the integrated information within each brain. But even within the brain, it's not my whole brain. I can lose, for example—well, we know this empirically—you can have quadriplegia: you're paralyzed, confined to a wheelchair, but you're still conscious. So we know the spinal cord isn't necessary for consciousness.
And again, this also helps in trying to understand how your theory can account for the fact that I have this experience of just being Hans, a person, a self sitting here, and I'm not having the experience of a gazillion cells that all have a different, very small—
Correct! And it's a very good point, Hans: the whole is different from the individual parts. What is conscious, what I have a conscious experience of, is really the whole, this entire maximum of cause-effect power. The individual neurons by themselves are not conscious. It's not that you have individual consciousnesses and then you pack them all together and get a superconsciousness. No, it's only the system, only the set of mechanisms that has maximum cause-effect power at that particular scale, not at the scale above, not at the scale below. It's very specific.
And a larger consciousness, as you said earlier, would wipe this out: if the two of us connected, and it became this new maximum of integrated information, it would wipe out our individual conscious experiences.
Correct. It’s like the Borg. Remember the Borg in the Star Trek universe? So Captain Kirk gets absorbed, and (I think very correctly in this movie) then he loses his own identity. Because when people say group mind, well, they mean different things by group mind. The idea of a group mind is: there is really only one mind. There’s one conscious experience. By definition, I cannot conceive what it would be to be two superimposed into one. That doesn’t make sense.
But to me—this is perhaps my religious background—thinking about God, or a superorganism, a superconsciousness, mind at large, whatever: if it were to have a conscious experience of itself, are you saying that it would then wipe out the consciousnesses that constitute it?
If they causally interact with God, yes, they would—I don't know about "wiped out," that's not the right term. It would be a single... it would be a truly mystical experience. To quote from the second act of Tristan und Isolde: "Selbst dann bin ich die Welt" ("then I am myself the world"). That's exactly the title of my next book, Then I Am Myself the World. Because then I experience the world and me as being identical.
DMT Trip
Did you have that experience?
Yes, I did. I did. On a jungle beach, in a distant land, doing strange things, yes.
When was that?
Oh, that was January of last year in Bahia, in Brazil.
We’re talking about your DMT trip, right? And you write about it, so I feel free to ask you about it.
This was ayahuasca, as part of a religious ceremony, the Santo Daime ceremony.
Had you ever done psychedelics before?
Yes, I’ve done—yes I have. Yeah.
And what did you—what made you decide to do something so intense? I’ve done DMT ayahuasca myself, and it’s been the most profound experience in my life. And I could read in your new book that it had a similar effect on you. And the first question was: what made you do it? Was it from a professional interest that you wanted to see?
Curiosity. But also, it's strange. I only came to psychedelics late in life. I had one experience, a profound, beautiful one, ten years ago, and then nothing till the year of the pandemic. And although I've studied consciousness since 1980—I published my first paper on it with Francis Crick in 1990, so I've been doing this for a long, long time—I just never had the occasion. I didn't know anyone. I grew up with, you know, "Drugs are bad," "Drugs are evil," "Alcohol is okay, but drugs are really evil." So I had no experience of it until, with this modern renaissance, I encountered so many people who had. And then I got curious. I wanted to experience 5-MeO-DMT, which gives these near-death-like experiences. Because I said, "Well, I am just curious," but also: if I claim to be a student of consciousness, and here we have a technique that can rapidly and dramatically—and sometimes transformatively—change your experience of the world, well, then I for sure want to experience it.
Because, you know, the motto of the Royal Society is nullius in verba: "take no one's word for it." And for consciousness, you know, I want to see the data, right? Show me the data. Now, for consciousness, it's not just that I want to hear your account; there's a difference between studying black holes or viruses or brains, and studying consciousness. Ultimately, I can only directly experience it myself. So I wanted to directly experience this.
And a little bit about sort of—I think you even mentioned that in your book: that it was an ontological shock. What was the shock about? What’s the shock factor?
You know, I've gotten very—so I'm now, well, at the time I was 65. I'm a successful physicist and brain scientist. So I've gotten very used to my metaphysics, which is sort of a version of… well, it sits somewhere between idealism and physicalism. I mean, people always ask, "What is IIT?" because they want to classify it. Well, it assumes that consciousness is primary, but it also assumes there are mechanisms, like brains. And then it seeks to study: what is the character of mechanisms that can give rise to consciousness? Why consciousness? Well, because according to IIT, ultimately the only thing that is absolute is existence. So IIT, out of its ontology, really differentiates between relative and absolute existence. Absolute existence is existence for itself. When I'm conscious, I exist for myself.
Yeah.
Tonight, I’m going to go to sleep. I’m going to wake up at some point. I’m going to dream. It’s absolute existence. It isn’t existence for anyone else, but it’s the only existence I know of. My sleeping body still exists, but only for others. When I’m, for example, in a deep sleep, my body still exists, but I don’t exist for myself, because if you wake me up out of a truly deep sleep, typically I tell you, “No, I didn’t experience anything. Just nothing.” But it exists for others. That’s relative existence. And really, that’s a fundamental divide in ontology between things that exist for themselves and those that don’t.
Okay. So I grew up very comfortable with this idea: the brain produces consciousness, and my professional career has been about trying to find the substrate of consciousness—which neurons are involved, what type of neurons and what type of circuit, can you measure it and, in people who have lost consciousness, predict whether they're going to recover, et cetera, et cetera. But then I had this experience where… it's very difficult to put into words. It was sort of a mind at large. It reminded me a little bit of Huxley—you know, The Doors of Perception—where I felt suddenly I had access… there was no more self, so I had escaped the gravitational field of the self, which is always there, always getting in the way. Right? So I was weightless. I was selfless. There was no more thought of Christof, no memories, nothing. And I was sort of surfing this panoply of galaxies. I don't know—I felt elated. It was an ecstatic experience. I was sitting there for twenty minutes until it ended and I came down. And I somehow tapped into—I mean, I experienced tapping into this: I'm the universe. I know it sounds trite, but…
No, to me it does not. To me it definitely does not. I've been there, in sort of that way. But that's, to me, the shock: what to make of it, right? Because that experience is so real. And when you get out of it, you know it was sort of your mind "on drugs." And then, for me at least, the old narrative kicks in again. You also cite the dictum, "No brain, never mind," right?
No brain, never mind.
Yeah. And then that narrative starts again, and the experience fades—but it’s so real, right? And so the question for you is: because you understand the brain, you understand consciousness, you have spent your whole career theorizing about this—how do you theorize about this experience? What can you make of it?
Well, so, I was totally perplexed. Completely perplexed for several days. I called up my very good friend Giulio Tononi, the chief architect of Integrated Information Theory, just to try to make some sense of it. He tried to bring me back down by saying, “Well, do you believe it was your brain that caused it?” I said, “Well, yeah, it was something I took.” You know, I participated in this ceremony. You’ve done it—they take like ten hours, twelve hours, and you chant, you dance, and you partake of Santo Daime. I know all of that. So in my normal moments, I say, “Yeah, it was my brain.” You have this psychologically very powerful ceremony that puts you in this very powerful state of expectancy, this group effect, and then you take the ayahuasca brew—and it’s the combination of all that and my brain. So the conservative interpretation is that it shows brains are capable of having extraordinary experiences.
Yeah.
Okay. And now I’ve had this myself. It’s not only that I’ve read about other people having it; I know now firsthand that brains are capable of this experience. But maybe—and this is where the ontological shock comes in—maybe this is really the black swan. Right? All it takes is one black swan to wonder: well, maybe there’s something fundamentally wrong with this point of view? And so I was trying to make sense of it. I had Schopenhauer with me, I read Schopenhauer again, and he makes this beautiful comment about—what I think he calls an aesthetic moment—when the apprehender merges with the apprehended, when the knower merges with the known, you become a pure, will-less subject. And that’s exactly what I had. And then I was looking around, and I came upon the writings of Bernardo Kastrup, the analytic philosopher, and was really, you know, [???] really catalyzed by that.
So now I’m in this in-between state from a metaphysical point of view. Because I study the brain—you know, the nice thing about the brain is you can take it and put electrodes in it (whether it’s a human or animal brain), you can record it, you can poke it, you can stimulate it. And there are lawful relationships between this brain and conscious experience. Like, if you stimulate the back of my head here, in most cases you’ll get visual experiences. And if I lose bits of this visual brain because of stroke or gunshot, whatever, I will lose visual experience. So there are these very lawful relationships between brain and experience. And doesn’t that imply that ultimately it’s all a product of the brain substrate? Well, yeah, it seems to be true. But then I had this experience, so how do I explain that? Is it really true that there is a universal mind? Then why don’t I experience it right now?
Yeah, yeah.
I would like an explanation, and I don’t have one, right? The tragedy of our lives is: we are all like monads, right?
Leibniz, yeah.
And there are no windows in this monad. I don’t have even a little access to your conscious experience. I can infer what you’re probably conscious of, given what you say and how you react and so on. But I never know.
But you’re open. It’s nice that you use the word “window,” right? Because I think in the psychedelic science space there are, broadly speaking, two camps. One will say that the experience is a mirror: it’s just your brain on drugs, right? So you get back what’s already there, in a strange way, as if you’re in this hall of mirrors, and your subconscious and your dreams, everything, will come up. And then there are people more like Huxley, who will say, from a filter-hypothesis standpoint: no, it’s a window into a mind at large, or whatever. And so you are actually saying: I’m now somewhat—torn.
Torn.
Yeah. And I can see why you’re torn, as you have built, of course, your whole career sort of on the mirror aspect, so to speak.
Well—or the other view: that the brain is ultimately the generator, or the substrate, to put it more neutrally. You need a substrate. No brain, never mind. Consciousness is the ultimate reality—in fact, that was just verified once again—but it requires a substrate. And IIT makes a precise prediction about the boundary, the whole/part relationship.
Yeah. Which is great. Yeah.
Which is great. But it says: the whole does not extend across the entire cosmos. Right now it’s based on a small part, or some part, of my brain, and it doesn’t include your brain—which is why I’m not you, which is why I don’t know what your conscious experiences are. So there would have to be some sort of boundary. I mean, is it true, according to analytic idealism, that I broke through some dissociative boundary that suddenly enabled this brain to access the universal mind? Or did my little consciousness become part of the universal mind?
The brain on these substances—brain activity goes down, right? Isn’t that shown in multiple studies? So we have brain activity going down, and yet people come back with these experiences of oneness, of no-self, of unity with you name it. So you have a big anomaly if you say the brain produces the mind. You see, why I’m fired up also has to do with how I’m primed, right? I have a religious background. So there’s always this wanting to sort of reestablish God, the faith that I have lost. And I felt sort of—
I must tell you—so, I grew up Catholic. I was an altar boy and said, you know, the Confiteor, and the Latin mass, and all that. I educated my kids in the faith. I’m lapsed; I’m not a Catholic anymore. But I noticed many of the ayahuasca—so when I went, it was with a whole bunch of people from West Coast tech.
Yeah.
Yeah, I guess it was tech people, and then people who all had an ex-religious background. Two different communities. Particularly in the tech community it is quite widespread.
Yeah.
But anyway, we can ask. Yeah, we all have our biases, it’s true. But still—as scientists, or just as thinking people—we want to arrive at an explanation that unifies everything we know about the universe, right? I mean, this is what bothered me when I was young as a Catholic. I had my faith for Sunday, you know—a very nice 2,000 years of tradition, all that. But what I learned during the week in school, or later on in university, or in the lab, was totally dissociated from it. And I want a single explanation for everything. So we have to be able to take these extraordinary experiences and try to incorporate them into our science and also into our metaphysics.
The thing with analytic idealism that I find challenging is, (A), the claim that you’re never truly unconscious—all it means is that your egoic consciousness isn’t there. I find that very difficult to test. I mean, how do I ever test this? Okay, my liver may be conscious, but it’s not telling me. So what do I do with that? I can’t make a scientific theory out of it, because it’s completely untestable. If I can’t access my liver, then… So I don’t find this very useful. And very early on, in one of the first papers with Francis Crick—you know, the guy who co-discovered the double helix of DNA; I started working with him on this—we wrote explicitly: we’re not going to worry about whether my spinal cord or my gut is conscious and not telling me. We’re simply not going to worry about that. We’re going to worry about the states where I can see this cake, and the states where I don’t see the cake. And I can arrange an experimental condition where, although the cake is in front of me and I’m looking at it, sometimes I see it and sometimes I don’t, by doing various manipulations. Okay, this gives me a perfect road into studying consciousness, because the input is always the same, but half the time I see it and half the time I don’t. So now I can put myself—or other people, or animals—in a magnet and see where the difference is between seeing and not seeing. Typically it’s done with faces, or other things, or with hearing—all sorts of visual and auditory phenomena and illusions, et cetera—where you can try to identify the substrate of consciousness. So that’s what I and many, many people have done.
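To make that contrastive logic concrete, here is a minimal simulated sketch in Python: the stimulus is identical on every trial, trials are split by whether it was reported as seen, and the analysis simply compares the average response. All trial counts, effect sizes, and variable names are invented for illustration and are not taken from any real experiment.

```python
import numpy as np

# Minimal sketch of the contrastive paradigm: the physical stimulus is identical
# on every trial, but the observer reports "seen" on some trials and "unseen" on
# others. The analysis looks for brain signals that differ with the report.
# All numbers below are made up for illustration.

rng = np.random.default_rng(0)

n_trials = 200
seen = rng.random(n_trials) < 0.5          # half the trials are reported as "seen"

# Simulated response of one brain region (e.g. one fMRI voxel or one electrode),
# with a small extra signal on consciously seen trials.
baseline = rng.normal(0.0, 1.0, n_trials)
signal = np.where(seen, 0.8, 0.0)          # assumed "consciousness-related" boost
response = baseline + signal

# The contrast of interest: same input, different experience.
mean_seen = response[seen].mean()
mean_unseen = response[~seen].mean()
print(f"mean response, seen trials:   {mean_seen:.2f}")
print(f"mean response, unseen trials: {mean_unseen:.2f}")
print(f"seen - unseen contrast:       {mean_seen - mean_unseen:.2f}")
```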
But that always identifies consciousness with the brain. Because this is where all the evidence seems to point—unlike cultures that thought it was in the heart. My heart isn’t conscious. I can get a transplant and, to first order—people are of course traumatized by their transplant operation and everything they’ve gone through before—but to first order, people don’t have the memories of the heart’s donor. They’re still themselves. Although there are—
There are these anecdotal stories, right, of people saying they acquired personality traits or weird sensations after a transplant?
Yeah, and such stories do exist. But you’ve got to remember: (A) a transplant is a massive, massive operation, right? (B) Before the transplant, you already have a history of a major heart attack or something, so these are not everyday happy people who just one day get their heart switched out. (C) There are all these fibers—the vagus nerve, the descending vagus nerve—that connect the heart with the brain in a bidirectional way, and it’s very difficult to replicate those. So yes, you’re not quite the same person; no question about it. Empirically, it would be very difficult to show otherwise. But the changes aren’t massive: they still see, they still have their memories, their old personality, et cetera, et cetera. So again, I find it very difficult to believe that the heart has its own consciousness. So—
What about the filter hypothesis, right? You already mentioned Huxley and The Doors of Perception—the idea that those substances you took somehow do something to the brain’s filter, expanding our consciousness so that more frequencies come in. What do you make of that? Do you find it plausible in a sense, or…?
At least we can in principle test it now. What does it mean, if you think about it in detail? Does it mean that I’m now sensitive to frequencies like ultraviolet or infrared? Well, that doesn’t seem to be the case, right? So what is it—assuming I had to explain this mind at large—how did that happen? Did I have new EM detectors, electromagnetic detectors? Well, probably not. So what was it that changed? Presumably something physical. I mean, we know psychedelics bind to certain types of serotonin receptors, et cetera, et cetera. So how can that lead to something like this? I’m puzzled. I’m very puzzled.
I recently watched the miniseries Chernobyl—it was on Netflix, and you can now watch it on Amazon—you know, the…
Oh, yeah, about the—yeah, I haven’t seen it.
The nuclear disaster. It’s very good—dark, but very good, and, if you know something about the technology and what happened, very close to the actual events. In the first episode there’s this really interesting scene: the explosion has just happened and people are totally confused, and the head of the reactor, who’s a Communist Party functionary with a background in nuclear engineering, refuses to accept the evidence. He asks the operator, “You’re telling me that the core exploded?”—which is what happened—“Can you tell me any way in which the core of an RBMK (this is the type of reactor the Russians built) can explode?” And the operator says, “None. No, I can’t.” “Well,” he says, “there you go. So obviously it couldn’t have exploded.” But then the operator says, “I’m sorry, it did explode.” So sometimes facts are facts. Just because you don’t have a theoretical explanation doesn’t mean you get to throw out the fact—you can’t say it didn’t happen simply because you don’t have a good explanation for it.
Bridging the Divide
Yeah, you don’t want to look through the telescope. Yeah, that’s it. But what is your view? What can IIT say about that state, about the psychedelic state? Did you see any links there?
Yeah. I mean, IIT can say, for instance, why a brain state with very little activity will, according to IIT, still be conscious.
Yeah.
So IIT is very clear. It’s not the total amount of brain activity. Not at all.
It’s not the amount of computation. If we take a computer metaphor: it’s not the amount of voltage going through the system.
No, it’s not the amount of action potentials or hemodynamic activity. Ultimately it has to do with the differentiated patterns and how much causal power there is. Take a brain that doesn’t fire—an extreme, limiting case, unlikely to happen—but in that limiting case you can imagine that very few of my neurons are actually active right now. Yet they could be active. There’s nothing wrong with them inherently; they’re just not active.
They’re not, yeah.
This is a particular conscious state which is unlikely to happen in normal life, because in normal life there’s always something going on, or I’m thinking of something. So it’s extremely unlikely, but it would be a very strange state, because we so rarely find ourselves in those circumstances. And maybe this is what people experience during certain types of meditation. You know, in advanced meditation, when you have what they call pure presence—no thought, no perception, no self—well, maybe that approximates a state like that. So IIT can certainly say something about that, unlike the dominant theories, like Global Workspace, where if there’s nothing to communicate, nothing to broadcast, it won’t be conscious.
Yeah, okay, so Global Workspace really builds on that computation metaphor: there has to be sort of processing going on.
Very explicitly. They say it explicitly, yes.
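To put a number on that contrast between activity and causal power, here is a toy Python sketch comparing a silent-but-intact two-unit network with a causally dead one. The “effective information” computed below is a drastic simplification standing in for IIT’s Φ, which is defined very differently and is far more involved; the networks and the measure are illustrative assumptions only.

```python
import itertools
import math

# Two toy 2-node Boolean networks. This is NOT IIT's Phi -- just a drastically
# simplified illustration of "causal power": how much the network's mechanism
# makes the next state depend on the current one.

def intact(a, b):
    # Each unit copies the other on the next time step; units that happen to be
    # silent (0, 0) are still perfectly capable of responding.
    return b, a

def lesioned(a, b):
    # Units no longer respond to their inputs at all (causally dead).
    return 0, 0

def effective_information(update):
    """Entropy (bits) of the next state when the current state is uniform.
    For a deterministic update this equals the mutual information between
    current and next state -- zero if the mechanism ignores its input."""
    states = list(itertools.product([0, 1], repeat=2))
    p_x = 1 / len(states)
    # Distribution over next states when the current state is uniform.
    p_y = {}
    for s in states:
        y = update(*s)
        p_y[y] = p_y.get(y, 0.0) + p_x
    return sum(p * math.log2(1 / p) for p in p_y.values())

print("intact but silent network :", effective_information(intact), "bits")
print("causally dead network     :", effective_information(lesioned), "bits")
# The silent-but-intact network still constrains its future (2.0 bits);
# the lesioned one does not (0.0 bits), even though both can show zero activity.
```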
So are you saying that IIT could account for near-death experiences, right, in that sense?
So it depends on what you mean and how you interpret them. There’s no question in my mind—and IIT in principle has no problem with this—that under certain conditions, which may relate to anoxia, ischemia, et cetera, when the brain is shutting down, the brain is still causally active. So even though brain activity may be down, the brain can potentially still be active. But if, because of anoxia, the brain is so severely deprived—let’s say, in the extreme case, of all its ATP, the energy currency, or of all its oxygen—that neurons cannot fire anymore, then there’s no more substrate. And then there’s no experience. That would be very close to what you see in a deep coma: there’s no consciousness there, people don’t respond to you, you shine a bright light in their eye and check the pupil reflexes, et cetera, et cetera.
What IIT—at least as I understand it [???], and I cannot imagine how it could be otherwise—couldn’t account for is… well, there are several claims here. One claim is totally believable: that people have extraordinary experiences, and that these can transform their lives for the better, either immediately or years after. They lose their fear of death. All of that, I believe.
Check. Yeah.
Check, no problem. The much more troubling claim is that the brain was actually isoelectric, which means it’s causally ineffective—there are no more causal powers. Then IIT would say: no causal power, no conscious experience. As simple as that. But the claim is that there’s this—what’s it called?—extra, non-localized consciousness, right? That you now tap into this non-localized consciousness, and you can have all these external experiences: you can have your life review, you can think, you can track other people, you can know what they’re thinking, you can see them—remote viewing—all sorts of things that are certainly beyond any conventional science. And IIT doesn’t have any way to explain that.
That’s a difficult one. Yeah. So out-of-body-experiences or extrasensory perception, all the sort of Dean Radin stuff, you don’t see IIT—
Not happening in a brain that’s causally incapacitated. No. In a normal brain, yeah, you can have all sorts of experiences. I mean, we have a—right, you can have—
And do you take those empirical findings as—do you regard them as facts and just say IIT cannot account for them, or do you also—
No, the fact of the matter is that people report these experiences. That I accept; I always have. I have no problem with that. And yes, I’m intrigued by these claims—that team from [???], in particular, in this book—and I’ve read some of the literature, so I’m aware of what people claim. I’m much more skeptical about this non-local consciousness when the brain is flat. I know lots of people want to believe it, of course. But it’s also not easy to assign the timing of the conscious experience. So, you have a heart attack. Your clinical team is working on you, people are screaming and shouting, you get pumped with sedatives, et cetera. You wake up two hours later in the clinic, and at some point you say, “I had these experiences.” Okay, I believe you. Now, when exactly did they happen? It’s very difficult, because memory under these conditions is extremely unreliable, in particular the timing of memories. Did it happen while your brain was totally flat? Or did it happen afterwards, as your brain rebooted? Because if it happened while the brain was flat, that would fundamentally violate everything I know about neuroscience.
And yes, then people say, “Well, it’s a transmitter.” You know, the radio-transmitter theory. It goes way back—William James had it already, in his book The Varieties of Religious Experience. There’s this universal consciousness, and my brain tunes into it; and just as a radio can be broken while the radio waves are still out there, consciousness is still out there. Yeah, that really would require a fundamental shift for me—because, I mean, what’s the substrate for this universal consciousness? There has to be something there.
Yeah, exactly. No, I really like the fact. I mean, I think—
It’s challenging.
It’s challenging. That’s what I mean.
I mean, I do agree there are lots of testimonials, and I don’t know what to make of them: that people can have remote viewing, that they can affect time, that they can go back in time or even predict the future. All of that fundamentally violates everything I know about physics. So is there a way—I would be much more… is there a way that I can induce this condition? Can I bring you to my lab and do something with you, so that you have this experience, but a little more reliably?
That would be cool, if we can sort of—
Well, yeah, because then I can study it systematically. I can see what type of personality or what type of state matters, what the psychology is—how important is set, how important is setting, right? Which part of the brain is essential for it? I can do all of this. Now I can put electrodes in you, and EEG devices, and fMRI.
You are already—I mean, you have been leading a team of how many people at the Allen Institute? How many people are you there?
Well, I stepped down from my executive duties. At the time when I led it there were 330 brain scientists.
So mega. And—
Now it’s 700.
And is it sort of a question—when we talk about studying these altered states of consciousness—it’s not about the money, right? It’s just that we lack the scientific understanding? Or would you say—I mean, if I gave you a couple billion dollars, right, the kind of money we put into the new colliders we want to build at CERN to go even smaller, to find subatomic particles—if we put that amount of money into neuroscience—
To do what?
Yeah, exactly: to better understand these altered states. Or…?
So you can certainly put money there—and industry might. You know, this year we may see approval of MDMA-assisted psychotherapy in the US; next year it could be psilocybin-assisted psychotherapy for depression or generalized anxiety disorder. So industry is going to put a whole lot more money into that. But that doesn’t address the metaphysical issue, and it doesn’t address these extraordinary experiences that don’t happen on a psychedelic—these near-death experiences. There you would have to find a way to reliably, controllably, reversibly, and safely induce a near-death experience and bring the subject back—without killing, you know, ten percent of your subjects. That wouldn’t quite pass muster.
And in principle, it seems to me, why couldn’t we do it? I mean, if people have these experiences in the OR—you know, according to Pim van Lommel, about eighteen percent of the Dutch cardiac arrest patients in his study—why can’t we find a procedure to induce this, safely of course, but systematically? Because then I could really study it, again and again.
And would you agree that the best thing science can do on this front is to be metaphysically agnostic, right? To just openly—or…
It’s… I was just recently involved in sort of a public attempt at shaming IIT and trying to cancel it. And some of the people said, “Oh, IIT is all this metaphysical stuff. I’m a scientist. I don’t have metaphysics. I’m pure. I just go where the data goes.” Well, that’s completely baloney! Of course you have assumptions. You have assumptions about what you accept as data and what you don’t accept as data, right? Certain things that don’t conform to your metaphysics, whether it’s explicit or implicit, you simply reject as data. So of course everyone has a metaphysics, and I think it’s very difficult to do science without one. What counts as my fundamental piece of data? I have to make some assumption.
And you’re referring here to this open letter, right? By 120 of your colleagues who are saying that IIT—they even called it pseudoscience, right?
Yeah. That was the claim: IIT is pseudoscience. And some of the reasons given were totally bogus. One of them was that, well, some of its implications are that the fetus is conscious, and that’s anti-abortion, and therefore it’s pseudoscience.
Yeah, yeah, yeah. And that’s—
The other one was: oh, it’s funded by the daughter—I kid you not—it is funded by the daughter of a Republican donor, and so therefore it can’t be science, because clearly they have dark goals in mind—
But what really happened here, right? Because these are political arguments playing a role. What do you think was the reason that letter came out at this moment?
Well, probably just jealousy. You know, because IIT gets a lot of attention and they feel their theories don’t. So it’s just human jealousy. But more relevant—more interesting, more instructive—is the metaphysics. What came out of it is that the metaphysical assumptions are just very different. Most people in today’s world, because of the prevalence of computers and AI everywhere, believe—some explicitly, but most implicitly—that if I can simulate it, it is conscious. Clearly. That’s all they’re used to. And IIT says consciousness can’t be simulated—and to them, well, that’s obviously wrong; of course it can be simulated.
And yeah, I agree: you can simulate the behavior, as if it were conscious—I have no trouble with that—but you can’t simulate consciousness itself. That’s a very different thing. It’s like you said earlier: you can simulate the effects of gravity on, let’s say, black holes and stars, but that doesn’t mean you have the causal power of gravity. And so that came out of this episode: I think people realized, yeah, everyone has to be clearer; everyone has certain metaphysical assumptions.
Yeah, and I agree. So then the best thing is for us to be aware of that, and to be open about it. Exactly. Yeah.
That’s correct. And so, you know, analytic idealism has a different metaphysics—fine. But ultimately we have to be somewhat skeptical, because the last 2,500 years of—at least the Western intellectual history that I’m familiar with—have taught us that it’s very easy to delude ourselves. And the more intelligent you are, the more easily you can delude yourself. There are untold examples, and we all know this from our own lives, right? It’s very easy to fool ourselves: we want to believe something, and so we desperately believe it.
So one of the tools we have invented over the last thousand years is science, and therefore you have to be skeptical—in particular when something you really like happens—before you rush out and say, “Oh, I said it all along. This exactly proves my point.” That’s fine, but then you have to go back and ask: okay, let’s check this. Is it really true? Is it really what it appears to be, or is it the other assumption? Can I repeat it? How reliable is it? And so with this mind-at-large experience that I had, or with near-death experiences, with delocalized consciousness—well, is it reproducible? It clearly seems to be, because lots of people report it.
Alright, so we have to find some way of making inroads there. You could ask: why bother? It doesn’t seem to matter to the people who have these experiences. That’s true. Okay, so why? Because, (A), there are many people who have never had this experience for whom, if you could give it to them safely, it would perhaps be wonderful. Right? But also because, if you are really open-minded and there are these weird phenomena, you can’t just say, “Okay, I’m going to ignore them because they violate my deeply held assumptions.” Well then, maybe I have to get rid of my assumptions, because they are not the best ones.
Yeah, you’re right. Because the endeavor some people in psychedelic research have is to see whether you can get those beneficial effects that we see in depression, anxiety, you name it, without the psychedelic “weird” experience, right? And you even call that a fool’s quest. So you really think that going through the experience is needed to gain the effects that we see?
Yeah. So there are some—what I would call—naïve physicalists who say, “Well, ultimately a psychedelic is just a molecule”—which is true—“and it goes to some receptors in the brain”—which is true—“and then it binds to some other receptors, and turns on this mechanism and turns off that mechanism. So I can explain all of that; I don’t need consciousness.” So, on that view, you don’t need consciousness.
I don’t think that’s right, and I am trying to find innovative ways of testing it. So, for example, one idea: what happens if I take young people who are experienced with psychedelics, sleep-deprive them, and put them on an IV drip on two nights—one night I give them vehicle (just water, essentially, with some sugar), and the other night I give them two milligrams of psilocybin by IV. What happens when they sleep? Can I arrange the circumstances, for example, so that they don’t wake up—so they don’t have an experience? Of course, I have to make sure the dreaming hasn’t changed; maybe they now have really weird dreams or something. So it’s an ongoing experiment to see: will they still get the benefits, the therapeutic benefit?
Or you can do it under anesthesia. People are trying to do that: you anesthetize people and give them, for example, ketamine or a psychedelic—ketamine is being done at Stanford, psychedelics at some other university—to see whether, when you’re unconscious and I give you the substance—still the same molecule going to the same receptor, but now in an unconscious brain—you still get the same therapeutic effect. So there are ways to test this. Is consciousness really necessary? Do you really need the conscious experience, the “weird experience,” in order to drive the therapeutic benefit?
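A minimal sketch of how the sleep-night comparison described above could be analyzed, assuming invented numbers: each participant contributes one vehicle night and one psilocybin night, and the paired difference in a hypothetical outcome score is what the experiment would estimate. Nothing here reflects real data or the actual study protocol beyond what is described in the conversation.

```python
import numpy as np

# Toy sketch of the within-subject design: each participant gets one night on
# vehicle and one night on IV psilocybin while asleep, and we compare a
# hypothetical therapeutic outcome between the two nights.
# Every number here is an invented assumption, not data.

rng = np.random.default_rng(2)

n_subjects = 30
subject_baseline = rng.normal(0.0, 3.0, n_subjects)   # stable individual differences

# Hypothetical outcome: improvement in a mood score after each night.
vehicle_night = subject_baseline + rng.normal(1.0, 2.0, n_subjects)
psilocybin_night = subject_baseline + rng.normal(1.0 + 2.5, 2.0, n_subjects)
# The +2.5 encodes the hypothesis that the molecule helps even without a
# conscious psychedelic experience; the experiment exists to test whether
# that term is actually zero.

paired_diff = psilocybin_night - vehicle_night         # within-subject contrast
print(f"mean improvement, vehicle night:    {vehicle_night.mean():.2f}")
print(f"mean improvement, psilocybin night: {psilocybin_night.mean():.2f}")
print(f"mean paired difference:             {paired_diff.mean():.2f}")
```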
Very interesting. And we’ve talked a lot about intrinsic causal power, right? Thinking about the power of mind over matter, as people put it—and I’m referring, for instance, to the placebo effect—what are your thoughts here, also from an IIT perspective?
Yeah, so the placebo effect. It’s funny that people always say, “Well, that’s not a real effect.” It’s very real. In fact, with SSRIs, the most common antidepressant medication, studies suggest that probably 80% of the effect comes from the fact that you know that, on average, people who are depressed like you and are given this drug get some relief from their symptoms. That a person in a white coat with “M.D.” after their name has given it to you, that it’s a restricted substance, that it’s a little white pill—all of that leads you to the belief that it works. And this belief can reach all the way down into your gut, down into your immune system. This belief is a belief of your brain, and again it shows that a conscious mind can have an influence on your body: you’re more likely to respond, more likely to get a beneficial effect. That’s the placebo. It works for pain medication, it works for the immune system, et cetera. That’s why people do these randomized controlled trials, right? To see exactly how much difference there is: in both cases you’re given something, but in one case it’s just sugar and in the other case it’s the real substance. How much difference do you get? So it’s a very real effect that shows the mind has powers that reach throughout the wiring of the body—not just up here, but also in your gut, et cetera. So in principle, IIT has no problem with that.
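As a toy illustration of that trial logic, the Python sketch below simulates two arms that both receive “something”—the pill and the ritual around it—while only one receives the active molecule. The split between the placebo component and the drug-specific component is simply chosen to echo the rough proportion mentioned above; all numbers are invented.

```python
import numpy as np

# Toy sketch of why randomized controlled trials are needed to separate the
# placebo component from the drug-specific component. All effect sizes are
# invented for illustration and are not taken from any real trial.

rng = np.random.default_rng(1)

n_per_arm = 500
placebo_component = 8.0    # assumed symptom improvement from expectation alone
drug_component = 2.0       # assumed extra improvement from the molecule itself
noise = 5.0

# Both arms get "something" (a pill and the ritual around it); only one arm
# gets the active substance on top of that.
placebo_arm = rng.normal(placebo_component, noise, n_per_arm)
drug_arm = rng.normal(placebo_component + drug_component, noise, n_per_arm)

total = drug_arm.mean()
specific = drug_arm.mean() - placebo_arm.mean()
print(f"improvement in drug arm:       {total:.1f}")
print(f"improvement in placebo arm:    {placebo_arm.mean():.1f}")
print(f"drug-specific part:            {specific:.1f}")
print(f"share attributable to placebo: {placebo_arm.mean() / total:.0%}")
```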
Can it give us a mechanism to better understand what happens?
With placebo? Well, we are finding this out now. People did this first in mice, but now it’s being done in people: you can look at the vagus nerve, which extends into every organ, and it’s two-way communication. So there’s communication between your kidney and your brain, and between your brain and your kidney—and the heart, the lungs, the viscera, the sex organs, all of that. So there’s two-way communication, and your expectations will partly shape it, for better or worse. There’s also the nocebo, right? If you’ve just learned, or read in the newspaper, that something is going to hurt you, it’s more likely to hurt you than if you hadn’t read that. So it goes both ways: your expectation can influence things. It’s really a measure of belief. The placebo tells us how much belief can do—belief may not move mountains, but it can shift whether you respond to a drug, whether you are less depressed or more depressed.
But we don’t yet have a mechanism here, right—sort of a neural correlate of belief that you can point to?
No—some people are studying it. You can block placebo responses: they’re opiate-mediated, for example, so you can use opiate blockers to block the placebo effect. So we’re beginning to know something about it. I don’t think it’s magical. We have this idea that the brain is isolated and just controls the body from above, but there are two-way connections that reach down to the roots of the body; everything is wired up. And so, if you think about it, it’s not that surprising.
It’s funny that we have something whose effectiveness is proven by science—the placebo effect—that we could use for our benefit, right? But it feels unethical, perhaps. What—
Well, in fact, today it’s very difficult to study, because these days, when you do a study with humans, you have to tell them that they may be getting a placebo. So it makes it—
And then you’re already messing with the effect.
So it’s like a Heisenberg effect, yes. That being said, people still get the placebo effect, although it’s less strong.
This is also—I did this video on what is called the comforting delusion objection in psychedelic research, right? Michael Pollan raised this point: we know that people have these weird experiences, and that through metaphysical, ontological shocks they become less depressed; but from a physicalist–naturalist perspective we have a problem with that, because we cannot account for it. So is it ethical to give them a drug that we know works? It’s this weird dilemma that at its basis has to do with metaphysics—that you have a problem with a different metaphysics, right? How is that among your colleagues? Do you see more openness nowadays?
Oh yeah, hugely. Everyone wants to do psychedelics. Yes. In fact, if you just sit in your airplane seat and mention, “I’m a neuroscientist working on psychedelics”—“Oh! That’s interesting. What do you do?”
Now we’re talking. Now we’re talking.
And you’ve got to remember: unlike in Holland, in our country it’s illegal.
Yeah. You really have to go to the jungle, yeah.
My first mushroom experience—I knew what it was all about. I got to the beating heart of the universe. I finally understood it.
Yeah.
It remained ineffable, but I carry that still with me.
DMT Trip Report
But that was your mushroom experience. This DMT one went deeper.
5-MeO-DMT. Yeah, that’s a terrifying experience I’m never ever going to do again. Yeah, it felt like a near-death experience. So you lose—
Yeah. Also, I guess, because it’s so instant, right—it’s like in how many seconds?
Three breaths.
Unbelievable!
[Simulates three inhalations] …and then, the third one, my field of view starts breaking up into hexagonals, black, and I couldn’t take the fourth breath anymore because I was going down in this black hole.
Was there still a self to be terrified at that moment, or was—
At that moment my last thought was: “Holy shit, what have I done?” But I knew I had to give myself over. That’s the other thought. I knew I had to just let go, and I did.
Was there, then, bliss thereafter?
There was terror and bliss. It was this weird combination. It was just bright—you know, there was a bright light, sort of this singularity. And there were just these things: bright light, no self, no Christof, no memories, no thought, no future, no past, no time, no space, no fear. There was just terror and ecstasy. And then the next thing: Arvo Pärt, Spiegel im Spiegel.
Yeah.
The closing bars. The guide had put it on when she gave me the pipe to smoke, and the first thing I heard when I came back was the closing. So I knew it had been roughly nine minutes. But it was timeless. There’s no time; the passage of time comes to a stop. So it’s something extraordinary. And the other extraordinary thing: two hours later, you’re perfectly fine. I mean, I walked out of the house perfectly fine.
That’s so crazy. The non-toxicity of it, right? Your brain just knows what to do with this molecule. And…
It’s remarkable.
Well, what made you decide to dedicate a book to the experience? I mean, of course, your new book touches on a lot of your work, but it specifically starts—
Well, I open with that experience because of what it really brought home once again: the centrality of consciousness. Because at that moment there was nothing. There was no self. And there was no world. There was only consciousness. So, really: consciousness is absolute. It’s the only thing that really exists for itself. Not even self-consciousness, because there was no more self; there was no more Christof. So that’s number one, first and foremost: the world—I don’t know whether the world exists. In that state there was no more world. Well, a point of unbearable lightness—you can, I guess, still say that is part of the world, but it’s pretty minimal.
And so of course I was terrified. You know, I woke up, I totally cried. Because, you know, I had just lost everything.
Unbelievable. Yeah.
So it was good, in a sense—I discovered this a few weeks later. Up to then, as I got older, I had these episodes where I lay awake in bed at night with a fear of death. You know, you just think about death, about being dead forever and ever and ever. It’s terrifying, right? And I discovered a few weeks later: huh, I haven’t had one of these since. And that was now four years ago, and I’ve never had it again.
That’s—yeah, you write about a calmness. A calmness that sort of has…. Yeah. And has it—
It’s a nice benefit.
Yeah. And do you think it has changed, or will it change your scientific career?
No, because at that point I was still operating on: well, yeah, it’s my brain; it shuts down—partially shuts down, not totally, because otherwise I wouldn’t have been conscious. It’s a very interesting experience that has these benefits, but it’s not an experience I find impossible to explain. This mind-at-large experience I find more challenging to explain—if it’s really true, if I wasn’t deluding myself or having an illusion. Maybe I just had a weird experience that I think means mind at large, but maybe not, you know? Maybe it just means something more banal. So that’s more challenging, if it really was what it seemed to be.
I think I’m still afraid. I think I have to go back into the psychedelic space once more and really go through it. I had an ayahuasca experience where I felt like I was dying, but I opted out. There was something that rebooted in me that said…
You resisted, you mean?
Shit, I have a wife at home and she’s pregnant. I did it when my second child was coming.
Maybe not the best time to do it!
But it was! It was. It was a good time. I had to do it, and I had to do it at that time. But yeah, the self rebooted. I wanted out, but I was still very much under the influence, and it became psychotic, and these time loops were really difficult. And then I got this “Oh my God!” It’s as if I had to die, but I didn’t. And I still think I need that experience, in a sense. Do you feel me here, or…?
I hear you very much. So, with Santo Daime—if you become a member of the Santo Daime Church, you’re supposed to do this with some regularity. And it hasn’t happened for me since then, partly because I try too hard. The trick—from talking to many people and reading about it—is that you have to let go. You really have to let go. And the first time it happened, that was the trick: I just let go. When you resist, or you’re afraid, you say, “No, I don’t want to do this now, because of this and that.” So you have to—I mean, think about it. Gravity: we all experience gravity every day, all the time. We are born into gravity; we will die in this gravitational well that is planet Earth, right? And only under very rare conditions—when you take one of those parabolic flights and are weightless for a minute, or when you become an astronaut—do you become weightless. Those are very rare. It’s the same with the self. We are always within the confines of the gravitational field of the self, right? There’s always someone observing: what does this mean for me, what does it mean for my long-term well-being, et cetera, et cetera. And removing the self is very, very difficult—it is always observing. So that’s the trick: you have to let go, to let it be and just be part of the experience. As Schopenhauer said: no more distinction between the apprehender and the apprehended, between the knower and the known.
The subject-and-object divide, the great divide as you mention it in your book.
Yes. And I think that’s something—when I talk to other people who have done this much longer, they can do it routinely.
They are experienced psychonauts, yeah.
Yes.
Christof, thanks so much for this conversation.
Thank you! It was a pleasure. I enjoy talking about these topics.
Yeah. Nice to hear that. And just one more thing. You quote Tristan und Isolde, and you speak German, right? So what’s the title of your book—this becoming one with the universe—in German? Could you end with a Tristan und Isolde quote in German for us?
Selbst dann bin ich die Welt—which is, you know, “Then I am myself the world.” It’s from the second act, when Tristan and Isolde, in this ecstatic love duet, try to overcome—it’s pure Schopenhauer in some sense—they try to overcome their own identity. They want to be the other. They truly want to merge. They want to become this amalgamation, this one über-mind. And they sing: “No more Tristan. No more Isolde.” They become one. And it’s fantastic—some of the most glorious music in the Western canon.
It’s beautiful, yeah. Thank you.