All quotes from Daniel Schmachtenberger’s

The question is: where it is not a synergistic satisfier, where there are zero-sum dynamics that are happening, the things that are progressing are at the cost of which other things? And we’re not saying that nothing could progress in this inquiry. We’re saying: are we calculating that well? And if we factor all of the stakeholders—meaning not just the ones in the in-group, but all of the people; and not just all of the people, but all the people into the future; and not just all the people, but all the lifeforms; and all of the definitions of what is worthwhile, and of what is a meaningful life, not just GDP—then are the things that are creating progress actually creating progress across that whole scope?

When a new technology emerges, if it confers competitive advantage on those who use it, it becomes obligate. Because whoever doesn’t use it, or at least some comparable technologies, loses when you get into rivalrous interactions.

If anybody starts to cost resources properly, price resources properly—meaning pay for what it would cost to produce that thing renewably via recycling and whatever it is, and not produce pollution in the environment through known existing technology—they would price themselves out of the market completely relative to anyone else not doing that. So either everybody has to, or nobody can, right? And whether we’re talking about pricing carbon or pricing copper or pricing anything: we price things at the cost of extraction plus a tiny margin defined by competition, and that is not what it cost the Earth to produce those things, nor the cost to the ecosystem and to other people of extracting them.

A key change to the story is that the winners can no longer win at the expense of everybody else: we are now winning at the expense of the life support systems of the planet writ large.

There are models that win in the short term, but that actually move towards comprehensively worse realities and/or even self-extinction: evolutionary cul-de-sacs. And I would argue that humanity is in the process of pursuing evolutionary cul-de-sacs, where the things that look like they are forward are forward in a way that does not get to keep forwarding. And at the heart of that is optimizing for narrow goals, and at the heart of that is perceiving reality in a fragmented way.

What it takes to maintain that civilizational system will destroy the biosphere the civilizational system depends on, so we must re-make the civilizational system fundamentally. We do need civilizational systems, we do need technological systems, but we need ones that don’t have embedded exponential growth obligations. We do need ones that have restraint, we do need ones that don’t optimize narrow interests at the expense of driving arms races and externalities. We do need ones where the intelligence in the system is bound by and directed by wisdom.

There is a dialectic between a traditional impulse and a progressive impulse. The traditional impulse basically says: I think there were a lot of wise people for a long time—wise and smart people—who thought about some of these things more deeply than I have, who fought and argued; and the systems that made it through evolution made it through for reasons that have some embedded wisdom in them that I might not understand fully, so it makes sense for me to have, as my null hypothesis, my default, trusting those systems. They wouldn’t have made it through if they weren’t successful, if they didn’t work. And likely, the total amount of intelligence embedded in them is more than the amount I have thought about this thing. Even when it isn’t held consciously, that’s the traditional intuition. The progress intuition is: collective intelligence is advancing, built on all that we have known. We’re discovering new things, and we’re moving into new problem sets where the previous solutions could not possibly be the right solutions, because we have new problems. So we need fundamentally new thinking. Obviously, these are both true.

The individual human is not the unit of selection in evolution. The tribe is.

The unit of selection—the unit that is driving the dominant features of sapiens—is a group. That’s actually a really important thing to think about, as opposed to the idea that the unit of selection is an individual, because we have such an individualistically focused culture today, and we think in terms of individual focus far out of proportion to the actual evolutionary fitness of an individual outside of a tribe.

Tools give the capacity to do certain things better, to achieve certain goals better. And insofar as they do, the humans that use them win at things, and as a result everybody has to use them or they kind of lose. So, one: the tools become obligate.

Tools encode different patterns of human behavior. Now I’m doing the behavior where I’m using that tool as opposed to doing some other thing. Encoding that pattern of human behavior changes the nature of human minds and societies at large. So it is not true that you’ve got values, and then tools are just neutral. The use of the tools changes the human mind individually and in aggregate, and then becomes obligate.

There is, in general, a perverse incentive favoring those who are more focused on opportunity than on risk, and as a result we get all the opportunities and all the risks.

The goal here is not an AI overlord that runs everything, it is: how do computation and intelligence capacities make new collective intelligence possible such that you can have the global coordination to prevent global catastrophic failures, but where that system has checks and balances in it so you don’t have centralized power, coordination, and corruption failures?

The cybernetic intelligence of corporations and nation-states and the world as a whole is already a general autonomous superintelligence—running on all the humans as general intelligences (rather than running on CPUs), but also using all the CPUs and TPUs and GPUs—in service of the collection of all the narrow goals.

If you wanted to make a superintelligence that was aligned with the thriving of all life in perpetuity, the group that was building it would have to have the goal of the thriving of all life in perpetuity—which is not the interest of one nation-state relative to others, and it’s not the interest of near-term market dynamics, or election dynamics, or quarterly profits, or finite-set metrics.

If you have a group that has a goal narrower than the thriving of all life in perpetuity, and it is developing increasingly general AIs in service of those narrower goals, those AIs will kill the thriving of all life in perpetuity.