The entire interview is available on YouTube.
We’ll not be able to hold a candle to AI.
Hmm. You scare the shit out of me when you talk about AI. Between you and Sam Harris.
Oh, sure!
I didn’t even consider it until I had a podcast with Sam once—
Sam’s great.
—he made me shit my pants talking about AI. I realized: oh, well this is a genie that, once it’s out of the bottle, you’re never getting it back in.
That’s true.
There was a video that you tweeted about one of those Boston Dynamics robots, and you were like, “In the future it’ll be moving so fast you can’t see it without a strobe light.”
Yeah. You could probably do that right now.
And no one’s really paying attention too much, other than people like you, or people that are really obsessed with technology. All these things are happening. These robots are—did you see the one where PETA put out a statement that you shouldn’t kick robots?
It’s probably not wise.
For retribution.
Their memory is very good.
I bet it’s really good.
It’s really good.
I bet it is.
Yes.
And getting better every day.
It’s really good.
Are you honestly, legitimately concerned about this? Are you—is, like, AI one of your main worries in regards to the future?
Yes. It’s less of a worry than it used to be, mostly due to taking more of a fatalistic attitude.
Hm. So you used to have more hope? And you gave up some of it and now you don’t worry as much about AI. You’re like: this is just what it is.
Yeah. Pretty much. Yes. Yes. It’s—no, it’s not necessarily bad, it’s just… it’s definitely going to be outside of human control.
Not necessarily bad, right?
Yes. It’s not necessarily bad, it’s just outside of human control. The thing that’s going to be tricky here is that it’s going to be very tempting to use AI as a weapon. It’s going to be very tempting. In fact, it will be used as a weapon—that’s the on-ramp to serious AI. The danger is going to be more humans using it against each other, I think. Most likely. That’ll be the danger, yeah.
How far do you think we are from something that can make its own mind up, whether or not something’s ethically or morally correct, or whether or not it wants to do something, or whether or not it wants to improve itself, or whether or not it wants to protect itself from people or from other AI? How far away are we from something that’s really, truly sentient?
Well, I mean, you could argue that any group of people, like a company, is essentially a cybernetic collective of people and machines. That’s what a company is. And then, there are different levels of complexity in the ways these companies are formed. And then there’s sort of a collective AI in Google Search, where we’re all sort of plugged in like nodes on the network; like leaves on a big tree. And we’re all feeding this network with our questions and answers. We’re all collectively programming the AI. And Google, plus all the humans that connect to it, are one giant cybernetic collective. This is also true of Facebook and Twitter and Instagram, and all these social networks. They’re giant cybernetic collectives.
Humans and electronics all interfacing. And constantly, now. Constantly connected.
Yes. Constantly.
One of the things that I’ve been thinking about a lot over the last few years is that the thing that drives a lot of people crazy is how many people are obsessed with materialism, and getting the latest, greatest thing. And I wonder: how much of that is… well, a lot of it is most certainly fueling technology and innovation. And it almost seems like it’s built into us. It’s what we like and what we want. That we’re fueling this thing that’s constantly around us all the time. And it doesn’t seem possible that people are going to pump the brakes. It doesn’t seem possible, at this stage—when we’re constantly expecting the newest cell phone, the latest Tesla update, the newest MacBook Pro—everything has to be newer and better. And that’s going to lead to some incredible point. And it seems like it’s built into us. It almost seems like an instinct; that we’re working towards this, that we like it. That our job—just like the ants build the anthill—our job is to somehow fuel this.
Yes. I mean, I made those comments some years ago, but it feels like we are the biological bootloader for AI, effectively. We are building it. And then we’re building progressively greater intelligence. And the percentage of intelligence that is not human is increasing. And eventually, we will represent a very small percentage of intelligence.
But the AI is informed, strangely, by the human limbic system. It is in large part our id writ large.
How so?
Well, you mentioned all those things; the sort of primal drives. Those are all things that we like and hate and fear. They’re all there on the Internet. They’re a projection of our limbic system.
[Baffled silence, then laughter]
It’s true.
No, it makes sense. In thinking of it as a… I mean, thinking of corporations, and just thinking of human beings communicating online through these social media networks as some sort of an organism that’s… it’s a cyborg. It’s a combination. It’s a combination of electronics and biology.
Yeah.
Does this—
In some measure, it’s like the success of these online systems is sort of a function of how much limbic resonance they’re able to achieve with people. The more limbic resonance, the more engagement.
Mmh. Which is probably, like, one of the reasons why Instagram is more enticing than Twitter.
Limbic resonance.
Yeah. You get more images, more video.
Yes.
It’s tweaking your system more.
Yes.
Do you worry—or wonder, in fact—about what the next step is? I mean, a lot of people didn’t see Twitter coming—you know, that communicating with 140 characters (or 280, now) would be a thing that people would be interested in. It’s going to excel, it’s going to become more connected to us, right?
Yes. Things are getting more and more connected. They’re, at this point, constrained by bandwidth. Our input-output is slow; particularly output. Output got worse with thumbs. You know, we used to type with ten fingers; now we have thumbs. But images are also a way of communicating at high bandwidth: you take pictures and you send pictures to people. That communicates far more information than you can communicate with your thumbs.
So what happened with you when you decided, or you took on, a more fatalistic attitude? Was there any specific thing, or was it just the inevitability of our future?
I tried to convince people to slow down: slow down AI, to regulate AI. This was futile. I tried for years. Nobody listened. Nobody listened.
This seems like a scene in a movie where the robots are going to fucking take over! You’re freaking me out! Nobody listened?
Nobody listened.
No one? Are people more inclined to listen today? It seems like an issue that’s brought up more often over the last few years than it was maybe five, ten years ago. It seemed like science fiction.
Maybe they will. So far they haven’t. I think people don’t—like, normally the way that regulations work is very slow. Very slow indeed. So, usually there’ll be something, some new technology: it will cause damage or death. There will be an outcry. There will be an investigation. Years will pass. There will be some sort of insight committee. There will be rule-making. Then there will be oversight. Eventually regulations. This all takes many years. This is the normal course of things if you look at, say, automotive regulations: how long did it take for seat belts to be implemented, to be required? You know, the auto industry fought seat belts, I think, for more than a decade—successfully fought any regulations on seat belts—even though the numbers were extremely obvious. If you had a seat belt on, you would be far less likely to die or be seriously injured. It was unequivocal. And the industry fought this for years, successfully. Eventually—after many, many people died—regulators insisted on seat belts. This time frame is not relevant to AI. You can’t take ten years from the point at which it’s dangerous. It’s too late.
And you feel like this is decades away or years away? From being too late? If you have this fatalistic attitude, and you feel like it’s going—we’re in a, almost like a doomsday countdown.
It’s not necessarily a doomsday countdown. It’s a—people can be—
Out of control countdown.
Out of control, yeah. People call it the Singularity, and that’s probably a good way of thinking about it. It’s a singular—it’s hard to predict, like a black hole: what happens past the event horizon?
Right. So once it’s implemented, it’s very difficult—because it would be able to—
Once the genie’s out of the bottle, what’s going to happen?
Right. And it will be able to improve itself.
Prob—yes.
That’s where it gets spooky, right? The idea that it can do thousands of years of innovation very, very quickly?
Yeah.
And then it will just be ridiculous.
Ridiculous.
We will be like this ridiculous biological, shitting, pissing thing, trying to stop the gods. “No! Stop! We like living with a finite life span and watching Norman Rockwell paintings.”
It could be terrible, and it could be great. It’s not clear.
Right.
But one thing’s for sure: we will not control it.
Do you think that it’s likely that we will merge—somehow or another—with this sort of technology, and it’ll augment what we are now? Or do you think it will replace us?
Well, that’s the scenario—the merge scenario with AI is the one that seems probably the best. Like if—
For us?
Yes. Like, if you can’t beat it, join it. That’s—
Yeah, that’s… yeah.
You know? So, from a long-term existential standpoint, that’s the purpose of Neuralink: to create a high-bandwidth interface to the brain such that we can be symbiotic with AI. Because we have a bandwidth problem: you just can’t communicate through your fingers, it’s too slow.
And where’s Neuralink at right now?
I think we’ll have something interesting to announce in a few months that’s at least an order of magnitude better than anything else. Probably—I think—better than probably anyone thinks is possible.
How much can you talk about that right now?
I don’t want to jump the gun on that.
But what’s the ultimate—what’s the idea behind it? What are you trying to accomplish with it? What would you like—best case scenario?
I think, best-case scenario: we effectively merge with AI, where AI serves as a tertiary cognition layer. You’ve got the limbic system (the primitive brain, essentially), and you’ve got the cortex—so, your cortex and limbic system are in a symbiotic relationship. And generally, people like their cortex and they like their limbic system. I haven’t met anyone who wants to delete their limbic system or delete their cortex. Everybody seems to like both. And the cortex is mostly in service to the limbic system. People may think that the thinking part of themselves is in charge, but it’s mostly their limbic system that’s in charge. And the cortex is trying to make the limbic system happy. That’s what most of that computing power is oriented towards: how can I make the limbic system happy? That’s what it’s trying to do.
Now, if we do have a third layer—which is the AI extension of yourself—that is also symbiotic. And there’s enough bandwidth between the cortex and the AI extension of yourself such that the AI doesn’t de facto separate. Then that could be a good outcome. That could be quite a positive outcome for the future.
So instead of replacing us, it will radically change our capabilities?
Yes. It will enable anyone who wants to have superhuman cognition. Anyone who wants. This is not a matter of earning power because your earning power would be vastly greater after you do it. So, it’s just like: anyone who wants can just do it. In theory. That’s the theory. And if that’s the case, then—and let’s say billions of people do it—then the outcome for humanity will be the sum of human will. The sum of billions of people’s desire for the future. And that could—
But billions of people with enhanced cognitive ability? Radically enhanced.
Yes. Yes.
And which would be—how much different than people today? Like, if you had to explain it to a person who didn’t really understand what you were saying. How much different are you talking about? When you say “radically improved,” what do you mean? You mean mind reading, do you mean—
It would be difficult to really appreciate the difference. You know, it’s kind of like how much smarter are you with a phone or computer than without? You’re vastly smarter, actually. You know, you can answer any question. If you’re connected to the Internet, you can answer any question pretty much instantly, any calculation. Your phone’s memory is essentially perfect. You can remember flawlessly—or your phone can remember: videos, pictures, everything. Perfectly. That’s—your phone is already an extension of you. You’re already a cyborg. Most people don’t even realize they are already a cyborg. That phone is an extension of yourself. It’s just that the data rate—the communication rate between you and the cybernetic extension of yourself that is your phone and computer—is slow. It’s very slow. And that… it’s like a tiny straw of information flow between your biological self and your digital self. And we need to make that tiny straw like a giant river: a huge, high-bandwidth interface. It’s an interface problem, data rate problem. Solve the data rate problem, then I think we can hang on to human–machine symbiosis through the long term. And then people may decide that they want to retain their biological self or not. I think they’ll probably choose to retain their biological self.
Versus some sort of Ray Kurzweil scenario where they download themselves into a computer?
You will be essentially snapshotted into a computer at any time. If your biological self dies, you could probably just upload into a new unit. Literally.
Pass that whiskey. We’re getting crazy over here. This is getting ridiculous.
Grab that sucker. Give me some of that. This is too freaky. See, if I was—
I’ve been thinking about this for a long time, by the way.
I believe you have. If I was talking to one of my—cheers, by the way!
Cheers! Yeah, this is great whiskey.
Thank you. Wonder where this came from. Who brought this to us? Somebody gave it to us. “Old Camp.” Whoever it was, thanks!
It’s good.
Yeah, it is good.
This is just inevitable. Again, going back to your—when you decided to have this fatalistic viewpoint: so, you tried to warn people. You talked about this pretty extensively and I’ve read several interviews where you talked about this. And then you just sort of said, “Okay, it just is.” Let’s just—and, in a way, by communicating the potential… for sure, you’re getting the warning out to some people.
Yeah. Yeah, I mean, I was really going on about the warning quite a lot. I was warning everyone I could. I even met with Obama, just for one reason: better watch out.
Just to talk about AI?
Yes.
And what did he say? He said, “What about Hillary?” Worry about her first. (Shh, everybody be quiet!)
No, he listened. He certainly listened. I met with Congress. I met with—I was at a meeting of all fifty governors and talked about AI danger. And I talked to everyone I could. No one seemed to realize where this was going.
Is it that, or do they just assume that someone smarter than them is already taking care of it? Because when people hear about something like AI, it’s almost abstract. It’s almost like it’s so hard to wrap your head around it…
It is.
…by the time it already happens it’ll be too late.
Yeah, I think they didn’t quite understand it, or didn’t think it was near-term, or weren’t sure what to do about it. An obvious thing to do is to just establish a committee—a government committee—to gain insight. You know, before you do oversight, before you make regulations, you try to understand what’s going on. And then, if you have an insight committee, once they learn what’s going on and get up to speed, then they can maybe make some rules, or propose some rules. And that would probably be a safer way to go about things.
It seems—I mean, I know that it’s probably something that the government’s supposed to handle—but it seems like I don’t want the government to handle this.
Who do you want to handle it?
I want you to handle it.
Oh jeez.
Yeah! I feel like you’re the one who could ring the bell better. Because if Mike Pence starts talking about AI, I’m like: “Shut up, bitch. You don’t know anything about AI. Come on, man.” He doesn’t know what he’s talking about. He thinks it’s demons.
Yeah, but I don’t have the power to regulate other companies. What am I supposed to…?
Right. But maybe companies could agree? Maybe there could be some sort of a—I mean, we have agreements where you’re not supposed to dump toxic waste into the ocean, you’re not supposed to do certain things that could be terribly damaging even though they’d be profitable. Maybe this is one of those things. Maybe we should realize that you can’t just hit the switch on something that’s going to be able to think for itself and make up its own mind as to whether or not it wants to survive, and whether or not it thinks you’re a threat. And whether or not it thinks you’re useless. Like: “Why do I keep this dumb, finite lifeform alive? Why keep this thing around? It’s just stupid. It just keeps polluting everything. It’s shitting everywhere it goes. Lighting everything on fire and shooting each other. Why would I keep this stupid thing alive?” Because sometimes it makes good music? You know? Sometimes it makes great movies. Sometimes it makes beautiful art. And sometimes, you know—it’s cool to hang out with.
Yes, all those reasons.
Yeah. For us those are great reasons.
Yes.
But for anything objective, standing outside, it would go: “Oh, this is definitely a flawed system.”