Of course, his closing comments belied his opening claim that his was a minority position. In fact, a lot of people are quite apprehensive about the things that are coming up in the future. And I think it’s largely a fear of change that motivates that: basically, we know what we were and we know how to live that way. We don’t know what we’ll become because of these things.
As to the specific dangers of individual terrorist use of these technologies, I’m sure Alfred Nobel had the same concern when he invented dynamite. And, in fact, as Ted Kaczynski shows, people can use technologies to do ridiculous things. But, on the other hand, the same technologies provide the means to build immune systems against these things. Dynamite is now completely widespread. Essentially it does self-reproduce, because the knowledge of how to make it is very widespread and can be passed from person to person through books, et cetera. So it’s self-sustaining, and certainly anybody can make a fertilizer bomb.
So we do have a number of isolated incidents, but then we have mechanisms that evolve in the social organization to counter them—exactly as immune systems evolved in biological organisms once pathogens evolved to eat them. When the first self-reproducing organisms appeared, they undoubtedly had no immune system. They just grew and multiplied using the raw materials that were available. Then versions of them found that one of the best sources of raw materials was other things just like themselves, and began to eat them. And the next step was the evolution of defenses; this happens in certain computer-evolution experiments as well.
But anyway, suppose we have a problem with genetically engineered diseases—and we don’t really know whether creating them is as easy as that. In fact, the natural world is quite inventive and tries almost everything. And it’s true that in the long run pathogens tend to get milder, because it really doesn’t pay them to kill off their hosts. But in the short run that’s not true: a pathogen can just mutate and become incredibly virulent, and it happens all the time. So there are things coming out of the swine farms and out of the jungles, killing lots of people. There are some extremely serious viral diseases around in the world. We have ways of dealing with them now that did not exist at the time of the plagues, and we have an understanding because of the knowledge that’s been accumulated.
When genetically engineered diseases are created—and frankly, at the moment it’s most likely that they’ll be created in some military laboratory, in some relatively well-funded environment—the threat will appear. There may be considerable damage. The world will mobilize against it, if it feels itself threatened. And the resources of a thousand times as many people as created the disease in the first place will be mobilized to find countermeasures. I can hypothesize all kinds of alternatives: nanotechnology can cancel out biotechnology, for instance, and robotics can evade both.
So the bigger picture, basically, is that, yes, we’re changing. And I quote the space-travel pioneer from the beginning of the twentieth century, Konstantin Tsiolkovsky: “Earth is the cradle of mankind, but we can’t live in the cradle forever.” He was talking in terms of simply moving out. Now we have more possibilities. Basically, the number of possibilities open to us is increasing all the time, and we have to sort through them to decide what we’re going to do with them and how we’re going to apply them to ourselves. And some of the sorting, of course, isn’t going to be done just by making rational decisions in advance. It’s going to be done by a trial-and-error process.
And indeed, I think, because of issues that Bill Joy has mentioned—these technologies don’t require special materials, and they can be pursued on a small scale—the possibility of stopping them is essentially zero. The incentive to keep developing them is tremendous, because they provide enormous incremental advantages. Just making one process a tiny bit better in a competitive environment means that one company grows at the expense of another company. There was a situation like this that was very evident in the late 1980s, when Japan especially was doing so well relative to the older industries in the United States. A slogan was invented that described basically what American companies had to do: “Either automate, emigrate, or evaporate.” And that was pretty much the case. The old-style industries just went away. You simply couldn’t afford to pay large numbers of people to produce things that there were better, less expensive ways to produce by automation. I’m sure the panel will come up with lots more interesting arguments along that line.
So I would just like to go quickly through what I was going to talk about, which is the robotics path. Because I think at the moment it’s the underdog among these technologies: biotechnology is on the verge of the big time, and nanotechnology, in small ways, is already doing a lot. And Ray has essentially said everything I probably would have said, except that his favorite scenario for the development of machine intelligence is different from mine. He looks to reverse-engineering the brain: actually studying it and perhaps building neural simulations of variations of it, or even doing full downloads. Of course, full downloads are one of the possible ways of evading biotechnological problems: basically, just become non-biological.
So there’s been a fantasy since at least the 1920s of building machines that can act as mechanical servants, basically: machines that do work around the house. And in fact, there’s a great need for that kind of thing, because most of us don’t have time to do the basic physical maintenance activities. The computer explosion has begun to automate almost all of our informational tasks, and that’s great. But if you think about it, those are really just the paperwork, which we consider kind of peripheral to the main stuff we do. And today we still have to do the physical work ourselves—although, of course, by automating the paperwork and making it better we can make more efficient use of the physical work that does go on, so we don’t do things redundantly and we streamline the process.
But doing simple things, like cleaning and bringing things from place to place, and maybe small repairs, seems so easy. It’s something we don’t pay people very much to do. And there’s been much interest in building machines to do it, and it’s eluded us. It eluded us in the twenties, when there were electronic robots built, but they simply were not smart enough to do almost anything. By the sixties, there were some machines that used rather simple control strategies. Well, actually, I don’t want to show too many slides, because that slows things way down. So, by the sixties we had machines that could transport things from place to place in factories, for instance, following signals from buried wires. The circuitry in these was very simple: just a couple of tubes, basically, running a little servo loop. By the eighties, when microprocessors became available to put into robots—before then, computers were much too expensive to consider for a practical application that substituted, basically, for one person—somewhat fancier methods were developed that used navigational markers mounted on the walls, or special programming for specific routes, but these robots required a specialist to install them. And, in fact, the market has been extremely anemic for those kinds of machines.
At the same time, there was research going on, some of it at a little agricultural college on the West Coast—or rather at a research institute related to it, called SRI—and at Stanford Junior University itself, where there was this thing called the Stanford cart. The building it lived in used to be up on Arastradero Road; it’s a horse farm now. In 1979 the Stanford cart was able to cross a cluttered room, mapping about thirty things and taking about five hours to cross thirty meters. And that was the state of the art. Now we have a thousand MIPS, and it’s becoming possible to build dense three-dimensional maps of the surroundings containing millions of points, or at least hundreds of thousands. An intermediate step, which I won’t show you, that builds two-dimensional maps is already controlling a lot of research robots that are running down hallways with a mean time between failures—getting lost or stuck—of about a day, which is not good enough for practical use but is on the way. With three-dimensional maps, the estimate is that the mean time between failures is measured in months, and that’s probably good enough for a first range of products.
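Just to give a flavor of what those intermediate two-dimensional maps involve, here is a minimal sketch, in Python, of a grid-style map update; the grid size, the sensor model, and all the names are illustrative assumptions, not our actual code.

import math

# A rough sketch of a grid map: each cell holds accumulated evidence that
# it is occupied; a range reading lowers the evidence along the beam and
# raises it at the measured obstacle. All numbers are made up.

GRID = 100              # cells per side (assumed)
CELL = 0.1              # meters per cell (assumed)
HIT, MISS = 0.9, -0.4   # evidence increments (assumed)

grid = [[0.0] * GRID for _ in range(GRID)]

def update(x, y, heading, rng):
    # Fold one range reading (meters, taken from pose x, y, heading) into the grid.
    steps = int(rng / CELL)
    for i in range(steps + 1):
        cx = round((x + math.cos(heading) * i * CELL) / CELL)
        cy = round((y + math.sin(heading) * i * CELL) / CELL)
        if 0 <= cx < GRID and 0 <= cy < GRID:
            grid[cy][cx] += HIT if i == steps else MISS

# Example: from the middle of the grid, an obstacle is seen two meters ahead.
update(GRID * CELL / 2, GRID * CELL / 2, 0.0, 2.0)

A robot that keeps such a map can also tell roughly where it is by checking how well new readings line up with it, which is what lets those hallway robots run for about a day between failures.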
So the expectation is that, within the next five years, three-dimensional mapping will show up on industrial guided vehicles that go from place to place, and will make those vehicles smart enough that new routes can be given to them by ordinary factory workers, which means that there should be a lot more of these in use—in the hundreds of thousands is the estimate. Cleaning machines: there was an attempt in the eighties to automate these, to make them robotic, but it was discovered that it simply doesn’t pay, with a cleaning machine, to hire a specialist to map each and every room the machine has to do. And whenever a new area has to be cleaned, you have to call in the specialist again. It’s much, much more economical to pay somebody minimum wage to push it through those areas.
Another factory vehicle: a security robot that patrols warehouses. There are a couple of hundred of these in use—again, an extremely anemic market at the moment. The problem is that these earlier-generation machines take a specialist to install and can be used only in very limited places, where things don’t change much. By making them smart, they can go anywhere. You lead them through the route they’re supposed to do; they memorize the three-dimensional surroundings and then locate themselves relative to them.
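To make that lead-through idea concrete, here is a minimal sketch of the teach-and-repeat pattern; the helper functions are hypothetical placeholders for the real sensing and driving code.

# Teaching: store waypoints along with a snapshot of the surroundings.
# Repeating: match what the robot sees now against what it memorized,
# then steer for the next stored waypoint. All names are illustrative.

route = []   # list of (waypoint_pose, memorized_surroundings) pairs

def teach(get_pose, get_surroundings, route_finished):
    while not route_finished():
        route.append((get_pose(), get_surroundings()))

def repeat(localize_against, drive_toward):
    for waypoint, memorized in route:
        pose = localize_against(memorized)   # where am I, relative to what was memorized here?
        drive_toward(waypoint, pose)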
So anyway, the plan continues. The industrial market grows modestly—up to hundreds of thousands, maybe a few million—and ultimately lends enough credibility to the technology. Before 2010, I’m planning this kind of thing: a robot vacuum cleaner that’s smart enough to be brought home, taken out of its box, and turned on, and whose first instinct will be to explore your house and build a map. After that, 99% of the time it can simply be left alone, and it will figure out the appropriate times to run, when there’s nobody home. And, of course, as it runs it fills up with dust. It doesn’t move furniture, but it’s low enough to get under it. It fills up with dust and its batteries run down, so most of the time it will fix that at a docking station, where it both recharges and regurgitates. And this is the dustbin, which gets cleaned about once a month, say, by a human at this stage.
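As a rough sketch of the behavior I just described, with the robot interface here being an assumption rather than any real product’s API:

# Out of the box: explore and map. After that: clean when the house is
# empty, and return to the dock to recharge and empty the dustbin.

def run(robot):
    if not robot.has_map():
        robot.explore_and_build_map()        # its first instinct when turned on
    while True:
        if robot.battery_low() or robot.dustbin_full():
            robot.return_to_dock()           # recharge and regurgitate
        elif robot.house_is_empty() and robot.good_time_to_clean():
            robot.clean_next_area()
        else:
            robot.wait_at_dock()             # stay out of the way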
[Inaudible audience question]
No, not this one, but the later ones. I’ll get to those just before I finish here. Here it is. I’m sorry these slides are so dark. Here it is edging along. The wheels are individually steerable. This is just a fantasy design, but it sort of uses the best ideas we had at the time. By the way, these little buttons that you see are actually cameras. It uses stereoscopic vision to build its three-dimensional maps; those are the techniques we’ve been using. And with a thousand MIPS of computing, that can be done about once a second. So that’s the bare minimum. It’s a thousand MIPS that has made these things possible now—a thousand MIPS, and probably a couple of hundred megabytes, which is an amount of computation that was simply out of reach in all of the eighties and, in fact, most of the nineties.
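For those who want the nuts and bolts of the stereoscopic step, here is a minimal sketch: a feature seen by both cameras with a horizontal disparity d lies at depth f times B over d, where f is the focal length in pixels and B the spacing between the cameras. The numbers below are illustrative assumptions, not this design’s actual calibration.

F_PIXELS = 300.0    # focal length in pixels (assumed)
BASELINE = 0.12     # distance between the two cameras in meters (assumed)

def depth_from_disparity(disparity_px):
    # A feature matched in both images, disparity_px pixels apart,
    # lies roughly this many meters away.
    if disparity_px <= 0:
        return float("inf")   # no measurable disparity: effectively at infinity
    return F_PIXELS * BASELINE / disparity_px

# Example: a 6-pixel disparity puts a point about 6 meters away,
# a 36-pixel disparity about 1 meter away.
print(depth_from_disparity(6.0), depth_from_disparity(36.0))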
[Inaudible audience question]
Yes, yes. In fact, I worked hard to try to find an animal that was just the right size, based on this comparison between nervous systems and computers, which depends on the level of emulation. If Ray wants to do it at the level of neurons, you get a larger number. I like to do it—because that’s the way we’ve done it—at the level of things like edge operations, where you have a few hundred neurons being replaced by the most efficient code that can do that job. And at that level, a thousand MIPS is about equivalent to a guppy, a small fish. And sometimes the robot is out in the field and it’s kind of far to go back to its docking station, so there’s no reason why it shouldn’t recharge even if it’s not yet full.
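To give a concrete flavor of the edge operations I mentioned a moment ago, here is a minimal sketch of one: a handful of multiply-adds over a small image patch standing in for the work of a few hundred neurons. It is illustrative only, not our actual code.

def edge_strength(img, x, y):
    # img is a 2-D array of brightness values; (x, y) is an interior pixel.
    # Horizontal and vertical gradients from the 3x3 neighborhood.
    gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
          - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
    gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
          - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
    return abs(gx) + abs(gy)   # roughly twenty arithmetic operations per pixel

Run over every pixel of a modest image several times a second, operations like that are where the thousand MIPS goes.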
So this leads, in this scenario, to a large mass market for things like this, these little gadgets, but also to the perception that more is possible. So the vacuum cleaners are followed by a series of utility robots of greater capability, probably larger, with manipulator arms that can pick things up and retrieve them, put them away, scrub horizontal surfaces, and eventually windows, toilets, that sort of thing. Very practical, down-to-earth things. Maybe there are outdoor applications for these first utility robots as well. But eventually, because there’s a large industry, there is expected to be a large amount of incremental development. This incremental development, I think, will roughly parallel the evolution of various characteristics in vertebrates in our own lineage. So by maybe as soon as 2010, these utility robots evolve into something that has a broad enough set of uses, each use being determined by an application program. So basically you load this thing with an application program to do some simple chore, and with a lot of application programs available it’s a universal robot—first generation.
[Inaudible audience question]
Well, the idea is that—this, again, is a fantasy design at this point—the shaft is a bus: a mechanical bus for supporting various peripherals. Here’s a small arm, a large arm, and a sensor head. The bus is, by the way, also electrical, and sends data to these things. The power and most of the computing are probably in the base. The bus can be extended, which is what the robot is doing right at this moment: it’s extending its own potential height. And these peripherals can mechanically ride up and down on the bus. This seems like a very nice modular way to do it. You can add more arms, or more sensors, or different kinds of sensors. And the robot, most of the time, can do that itself: if it has one arm, it can use it to install other things on itself.
[Inaudible audience question]
Well, the idea is that it shouldn’t fall over. It probably can get up, actually. I know that with the Aibo robot, one of its best tricks is to stand up, so I think so. Certainly this shaft would have to be strong enough to achieve that, and all you need is an arm sticking out at the right angle. But, in fact, it should have a really good sense of balance, and it should never get to a point where it’s tipped over that far.
So I see this as the beginning of a long evolution—several decades at least, about the same time scale the other speakers have all been talking about; say thirty years, or even forty. Over that time, these machines become gradually more capable. The first generation simply runs application programs; on that animal scale, it’s comparable to a lizard. The second generation, maybe ten years later, has conditioned learning ability and is comparable to a mouse: it can learn to tweak its actions. The individual steps in an application program have alternatives, and those are modulated on the basis of learning that happens internally, because there are modules that detect good and bad conditions and send conditioning signals. A third generation is able to model the world physically, culturally, and psychologically, and uses that simulation of the world to rehearse its actions, so it can learn mentally before it does things physically and learns faster; it can also imitate. Then we come to the fourth generation, which builds on the third. The third generation provides a physical-to-symbolic conversion in its simulator, and the simulator has to be kept tuned to the external world: if a simulation doesn’t behave the same way as the actual physical performance later, the simulation is tweaked, so it ultimately provides a good model. That model is labeled in terms of cultural properties—what things are, what their names are, and how they’re usually used—and psychological properties—how various actors in the world feel about things—which is important because you, meaning the robot, don’t want to do something to a human being, or even to some other robot, that will cause negative conditioning in them.
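A minimal sketch of that second-generation conditioning mechanism, with made-up names and numbers, might look like this:

import random

# Each step of an application program has alternative ways of being done.
# Internal modules that notice good or bad outcomes send a conditioning
# signal, which nudges the weights so better alternatives win more often.

weights = {}   # (step, alternative) -> preference weight

def choose(step, alternatives):
    w = [max(weights.get((step, a), 1.0), 0.01) for a in alternatives]
    return random.choices(alternatives, weights=w)[0]

def condition(step, alternative, signal):
    # signal > 0 from the "good condition" module, < 0 from the "bad" one.
    key = (step, alternative)
    weights[key] = max(weights.get(key, 1.0) + signal, 0.01)

# Example: a "cross the room" step slowly learns to prefer the route
# that doesn't bump into the coffee table.
alt = choose("cross_room", ["left_of_table", "right_of_table"])
condition("cross_room", alt, 0.5 if alt == "right_of_table" else -0.5)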
Well, there you go. And in fact, you’d want the conditioning module to accept human input, so you can say: just don’t do that. But it should also know that you like to watch The X-Files, you see; it has to have an idea of some of your likes. In fact, with the third-generation robot you can have a simple-minded conversation about how it feels, how you feel, what its plans are for the day, and so on. But it doesn’t understand anything outside of the house, just concrete things and concrete places. So the fourth generation adds a generalization layer to that, a reasoning program, which takes abstracted information out of the simulator, reasons about it, and then reinstantiates it into actual simulations. In fact, it uses the simulations to help with its abstract reasoning, because it can make an abstract plan, instantiate it, and if the plan doesn’t work out in the simulation, then obviously something’s wrong with it. So, here we go. And… thank you!