Human Civilization and AI

September 7, 2018

Musk and Rogan discuss the existential risk of uncontrolled artificial intelligence. They explore possibilities for regulation and oversight, the potential for human-AI symbiosis through brain-computer interfaces, and the philosophical implications of advanced AI surpassing human intelligence.

The entire interview is available on YouTube.


00:01

Musk

We’ll not be able to hold a candle to AI.

00:04

Rogan

Hmm. You scare the shit out of me when you talk about AI. Between you and Sam Harris.

00:08

Musk

Oh, sure!

00:08

Rogan

I didn’t even consider it until I had a podcast with Sam once—

00:11

Musk

Sam’s great.

00:12

Rogan

—he made me shit my pants talking about AI. I realized: oh, well, this is a genie that, once it’s out of the bottle, you’re never getting it back in.

00:21

Musk

That’s true.

00:23

Rogan

There was a video that you tweeted about one of those Boston Dynamics robots, and you were like, “In the future it’ll be moving so fast you can’t see it without a strobe light.”

00:34

Musk

Yeah. You could probably do that right now.

00:37

Rogan

And no one’s really paying attention too much, other than people like you, or people that are really obsessed with technology. All these things are happening. These robots are—did you see the one where PETA put out a statement that you shouldn’t kick robots?

00:51

Musk

It’s probably not wise.

00:54

Rogan

For retribution.

00:55

Musk

Their memory is very good.

00:57

Rogan

I bet it’s really good.

00:58

Musk

It’s really good.

00:59

Rogan

I bet it is.

00:59

Musk

Yes.

01:00

Rogan

And getting better every day.

01:01

Musk

It’s really good.

01:02

Rogan

Are you honestly, legitimately concerned about this? Are you—is, like, AI one of your main worries in regards to the future?

01:15

Musk

Yes. It’s less of a worry than it used to be, mostly due to taking more of a fatalistic attitude.

01:24

Rogan

Hm. So you used to have more hope? And you gave up some of it and now you don’t worry as much about AI. You’re like: this is just what it is.

01:38

Musk

Yeah. Pretty much. Yes. Yes. It’s—no, it’s not necessarily bad, it’s just… it’s definitely going to be outside of human control.

01:50

Rogan

Not necessarily bad, right?

01:52

Musk

Yes. It’s not necessarily bad, it’s just outside of human control. The thing that’s going to be tricky here is that it’s going to be very tempting to use AI as a weapon. It’s going to be very tempting. In fact, it will be used as a weapon—so, the on-ramp to serious AI. The danger is going to be more humans using it against each other, I think. Most likely. That’ll be the danger, yeah.

02:27

Rogan

How far do you think we are from something that can make its own mind up, whether or not something’s ethically or morally correct, or whether or not it wants to do something, or whether or not it wants to improve itself, or whether or not it wants to protect itself from people or from other AI? How far away are we from something that’s really, truly sentient?

02:51

Musk

Well, I mean, you could argue that any group of people, like a company, is essentially a cybernetic collective of people and machines. That’s what a company is. And then, there are different levels of complexity in the ways these companies are formed. And then there’s sort of a collective AI in Google Search, where we’re all sort of plugged in like nodes on the network; like leaves on a big tree. And we’re all feeding this network with our questions and answers. We’re all collectively programming the AI. And Google, plus all the humans that connect to it, are one giant cybernetic collective. This is also true of Facebook and Twitter and Instagram, and all these social networks. They’re giant cybernetic collectives.

04:05

Rogan

Humans and electronics all interfacing. And constantly, now. Constantly connected.

04:10

Musk

Yes. Constantly.

04:14

Rogan

One of the things that I’ve been thinking about a lot over the last few years is that the thing that drives a lot of people crazy is how many people are obsessed with materialism, and getting the latest, greatest thing. And I wonder: how much of that is… well, a lot of it is most certainly fueling technology and innovation. And it almost seems like it’s built into us. It’s what we like and what we want. That we’re fueling this thing that’s constantly around us all the time. And it doesn’t seem possible that people are going to pump the brakes. It doesn’t seem possible, at this stage—when we’re constantly expecting the newest cell phone, the latest Tesla update, the newest MacBook Pro—everything has to be newer and better. And that’s going to lead to some incredible point. And it seems like it’s built into us. It almost seems like an instinct; that we’re working towards this, that we like it. That our job—just like the ants build the anthill—our job is to somehow fuel this.

05:16

Musk

Yes. I mean, I made those comments some years ago, but it feels like we are the biological bootloader for AI, effectively. We are building it. And then we’re building progressively greater intelligence. And the percentage of intelligence that is not human is increasing. And eventually, we will represent a very small percentage of intelligence.

05:49

But the AI is informed, strangely, by the human limbic system. It is, in large part, our id writ large.

06:02

Rogan

How so?

06:03

Musk

Well, you mentioned all those things—the sort of primal drives. Those are all things that we like and hate and fear. They’re all there on the Internet. They’re a projection of our limbic system.

06:27

Rogan

[Baffled silence, then laughter]

06:31

Musk

It’s true.

06:32

Rogan

No, it makes sense. In thinking of it as a… I mean, thinking of corporations, and just thinking of human beings communicating online through these social media networks as some sort of an organism that’s… it’s a cyborg. It’s a combination. It’s a combination of electronics and biology.

06:48

Musk

Yeah.

06:52

Rogan

Does this—

06:53

Musk

In some measure, it’s like the success of these online systems is sort of a function of how much limbic resonance they’re able to achieve with people. The more limbic resonance, the more engagement.

07:12

Rogan

Mmh. Which is, like, probably one of the reasons why Instagram is more enticing than Twitter.

07:20

Musk

Limbic resonance.

07:21

Rogan

Yeah. You get more images, more video.

07:24

Musk

Yes.

07:25

Rogan

It’s tweaking your system more.

07:26

Musk

Yes.

07:27

Rogan

Do you worry—or wonder, in fact—about what the next step is? I mean, a lot of people didn’t see Twitter coming—you know, communicate with 140 characters (or 280, now)— [that it] would be a thing that people would be interested in. It’s going to excel, it’s going to become more connected to us, right?

07:48

Musk

Yes. Things are getting more and more connected. They’re, at this point, constrained by bandwidth. Our input-output is slow; particularly output. Output got worse with thumbs. You know, we used to have output with ten fingers; now we have thumbs. But images are also a way of communicating at high bandwidth. You take pictures and you send pictures to people. That sends—that communicates far more information than you can communicate with your thumbs.

08:17

Rogan

So what happened with you when you decided, or you took on, a more fatalistic attitude? Was there any specific thing, or was it just the inevitability of our future?

08:37

Musk

I tried to convince people to slow down: slow down AI, to regulate AI. This was futile. I tried for years. Nobody listened. Nobody listened.

08:50

Rogan

This seems like a scene in a movie where the robots are going to fucking take over! You’re freaking me out! Nobody listened?

08:56

Musk

Nobody listened.

08:57

Rogan

No one? Are people more inclined to listen today? It seems like an issue that’s brought up more often over the last few years than it was maybe five, ten years ago. It seemed like science fiction.

09:09

Musk

Maybe they will. So far they haven’t. I think people don’t—like, normally the way that regulations work is very slow. Very slow indeed. So, usually there’ll be something, some new technology: it will cause damage or death. There will be an outcry. There will be an investigation. Years will pass. There will be some sort of insight committee. There will be rule-making. Then there will be oversight. Eventually, regulations. This all takes many years. This is the normal course of things if you look at, say, automotive regulations: how long did it take for seat belts to be implemented, to be required? You know, the auto industry fought seat belts, I think, for more than a decade—successfully fought any regulations on seat belts—even though the numbers were extremely obvious. If you had a seat belt on, you would be far less likely to die or be seriously injured. It was unequivocal. And the industry fought this for years, successfully. Eventually—after many, many people died—regulators insisted on seat belts. This time frame is relevant to AI. You can’t take ten years from the point at which it’s dangerous. It’s too late.

10:50

Rogan

And you feel like this is decades away or years away? From being too late? If you have this fatalistic attitude, and you feel like it’s going—we’re in a, almost like a doomsday countdown.

11:06

Musk

It’s not necessarily a doomsday countdown. It’s a—people can be—

11:10

Rogan

Out of control countdown.

11:11

Musk

Out of control, yeah. People call it the Singularity, and that’s probably a good way of thinking about it. It’s a singular—it’s hard to predict, like a black hole: what happens past the event horizon?

11:22

Rogan

Right. So once it’s implemented, it’s very difficult—because it would be able to—

11:26

Musk

Once the genie’s out of the bottle, what’s going to happen?

11:28

Rogan

Right. And it will be able to improve itself.

11:31

Musk

Prob—yes.

11:33

Rogan

That’s where it gets spooky, right? The idea that it can do thousands of years of innovation very, very quickly?

11:39

Musk

Yeah.

11:41

Rogan

And then it will be just ridiculous.

11:43

Musk

Ridiculous.

11:44

Rogan

We will be like this ridiculous biological, shitting, pissing thing, trying to stop the gods. “No! Stop! We like living with a finite life span and looking at Norman Rockwell paintings.”

11:58

Musk

It could be terrible, and it could be great. It’s not clear.

12:02

Rogan

Right.

12:03

Musk

But one thing’s for sure: we will not control it.

12:07

Rogan

Do you think that it’s likely that we will merge—somehow or another—with this sort of technology, and it’ll augment what we are now? Or do you think it will replace us?

12:22

Musk

Well, that’s the scenario—the merge scenario with AI is the one that seems probably the best. Like if—

12:32

Rogan

For us?

12:32

Musk

Yes. Like, if you can’t beat it, join it. That’s—

12:38

Rogan

Yeah, that’s… yeah.

12:40

Musk

You know? So, from a long-term existential standpoint, that’s the purpose of Neuralink: to create a high-bandwidth interface to the brain such that we can be symbiotic with AI. Because we have a bandwidth problem: you just can’t communicate through your fingers, it’s too slow.

13:07

Rogan

And where’s Neuralink at right now?

13:12

Musk

I think we’ll have something interesting to announce in a few months that’s at least an order of magnitude better than anything else. Probably—I think—better than anyone thinks is possible.

13:22

Rogan

How much can you talk about that right now?

13:24

Musk

I don’t want to jump the gun on that.

13:27

Rogan

But what’s the ultimate—what’s the idea behind it? What are you trying to accomplish with it? What would you like—best case scenario?

13:35

Musk

I think, best-case scenario: we effectively merge with AI, where AI serves as a tertiary cognition layer. You’ve got the limbic system (the primitive brain, essentially), and you’ve got the cortex—so your cortex and limbic system are in a symbiotic relationship. And generally, people like their cortex and they like their limbic system. I haven’t met anyone who wants to delete their limbic system or delete their cortex. Everybody seems to like both. And the cortex is mostly in service to the limbic system. People may think that the thinking part of themselves is in charge, but it’s mostly their limbic system that’s in charge. And the cortex is trying to make the limbic system happy. That’s what most of that computing power is oriented towards: how can I make the limbic system happy? That’s what it’s trying to do.

14:38

Now, if we do have a third layer—the AI extension of yourself—that is also symbiotic, and there’s enough bandwidth between the cortex and the AI extension of yourself such that the AI doesn’t de facto separate, then that could be a good outcome. That could be quite a positive outcome for the future.

15:02

Rogan

So instead of replacing us, it will radically change our capabilities?

15:07

Musk

Yes. It will enable anyone who wants to have superhuman cognition. Anyone who wants. This is not a matter of earning power because your earning power would be vastly greater after you do it. So, it’s just like: anyone who wants can just do it. In theory. That’s the theory. And if that’s the case, then—and let’s say billions of people do it—then the outcome for humanity will be the sum of human will. The sum of billions of people’s desire for the future. And that could—

15:55

Rogan

But billions of people with enhanced cognitive ability? Radically enhanced.

15:58

Musk

Yes. Yes.

16:00

Rogan

And which would be—how much different than people today? Like, if you had to explain it to a person who didn’t really understand what you were saying. How much different are you talking about? When you say “radically improved,” what do you mean? You mean mind reading, do you mean—

16:19

Musk

It would be difficult to really appreciate the difference. You know, it’s kind of like: how much smarter are you with a phone or computer than without? You’re vastly smarter, actually. You know, you can answer any question. If you’re connected to the Internet, you can answer any question pretty much instantly, any calculation. Your phone’s memory is essentially perfect. You can remember flawlessly—or your phone can remember: videos, pictures, everything. Perfectly. That’s—your phone is already an extension of you. You’re already a cyborg. Most people don’t even realize they are already a cyborg. That phone is an extension of yourself. It’s just that the data rate—the communication rate between you and the cybernetic extension of yourself that is your phone and computer—is slow. It’s very slow. And that… it’s like a tiny straw of information flow between your biological self and your digital self. And we need to make that tiny straw like a giant river: a huge, high-bandwidth interface. It’s an interface problem, a data-rate problem. Solve the data-rate problem, then I think we can hang on to human–machine symbiosis through the long term. And then people may decide that they want to retain their biological self or not. I think they’ll probably choose to retain their biological self.

18:06

Rogan

Versus some sort of Ray Kurzweil scenario where they download themselves into a computer?

18:11

Musk

You could essentially be snapshotted into a computer at any time. If your biological self dies, you could probably just upload into a new unit. Literally.

18:21

Rogan

Pass that whiskey. We’re getting crazy over here. This is getting ridiculous.

18:26

Musk

Down the rabbit hole!

18:27

Rogan

Grab that sucker. Give me some of that. This is too freaky. See, if I was—

18:32

Musk

I’ve been thinking about this for a long time, by the way.

18:34

Rogan

I believe you have. If I was talking to one of my—cheers, by the way!

18:37

Musk

Cheers! Yeah, this is great whiskey.

18:39

Rogan

Thank you. Wonder where this came from. Who brought this to us? Somebody gave it to us. “Old Camp.” Whoever it was, thanks!

18:46

Musk

It’s good.

18:47

Rogan

Yeah, it is good.

This is just inevitable. Again, going back to your—when you decided to have this fatalistic viewpoint: so, you tried to warn people. You talked about this pretty extensively and I’ve read several interviews where you talked about this. And then you just sort of said, “Okay, it just is.” Let’s just—and, in a way, by communicating the potential… for sure, you’re getting the warning out to some people.

19:14

Musk

Yeah. Yeah, I mean, I was really going on about the warning quite a lot. I was warning everyone I could. I even met with Obama, and just for one reason: better watch out.

19:31

Rogan

Just to talk about AI?

19:32

Musk

Yes.

19:33

Rogan

And what did he say? He said, “What about Hillary?” Worry about her first. (Shh, everybody be quiet!)

19:39

Musk

No, he listened. He certainly listened. I met with Congress. I met with—I was at a meeting of all fifty governors and talked about AI danger. And I talked to everyone I could. No one seemed to realize where this was going.

20:04

Rogan

Is it that, or do they just assume that someone smarter than them is already taking care of it? Because when people hear about something like AI, it’s almost abstract. It’s almost like it’s so hard to wrap your head around it…

20:17

Musk

It is.

20:17

Rogan

…by the time it actually happens, it’ll be too late.

20:21

Musk

Yeah, I think they didn’t quite understand it, or didn’t think it was near-term, or weren’t sure what to do about it. An obvious thing to do is to just establish a committee—a government committee—to gain insight. You know, before you do oversight, before you make regulations, you try to understand what’s going on. And then, if you have an insight committee, then once they learn what’s going on, get up to speed, then they can maybe make some rules, or propose some rules. And that would probably be a safer way to go about things.

21:05

Rogan

It seems—I mean, I know that it’s probably something that the government’s supposed to handle—but it seems like I don’t want the government to handle this.

21:14

Musk

Who do you want to handle it?

21:15

Rogan

I want you to handle it.

21:16

Musk

Oh jeez.

21:16

Rogan

Yeah! I feel like you’re the one who could ring the bell better. Because if Mike Pence starts talking about AI, I’m like: “Shut up, bitch. You don’t know anything about AI. Come on, man.” He doesn’t know what he’s talking about. He thinks it’s demons.

21:27

Musk

Yeah, but I don’t have the power to regulate other companies. What am I supposed to…?

21:31

Rogan

Right. But maybe companies could agree? Maybe there could be some sort of a—I mean, we have agreements where you’re not supposed to dump toxic waste into the ocean, you’re not supposed to do certain things that could be terribly damaging even though they’d be profitable. Maybe this is one of those things. Maybe we should realize that you can’t hit the switch on something that’s going to be able to think for itself and make up its own mind as to whether or not it wants to survive, and whether or not it thinks you’re a threat. And whether or not it thinks you’re useless. Like: “Why do I keep this dumb, finite lifeform alive? Why keep this thing around? It’s just stupid. It just keeps polluting everything. It’s shitting everywhere it goes. Lighting everything on fire and shooting each other. Why would I keep this stupid thing alive?” Because sometimes it makes good music? You know? Sometimes it makes great movies. Sometimes it makes beautiful art. And sometimes, you know—it’s cool to hang out with.

22:23

Musk

Yes, all those reasons.

22:24

Rogan

Yeah. For us those are great reasons.

22:26

Musk

Yes.

22:26

Rogan

But for anything objective, standing outside, it would go: “Oh, this is definitely a flawed system.”
