Portrait of Yuval Noah Harari

Yuval Noah Harari

Author, Historian, and Professor
Born: February 24, 1976

Yuval Noah Harari is an Israeli author, public intellectual, historian, and professor in the Department of History at the Hebrew University of Jerusalem. He is the author of the popular science bestsellers Sapiens: A Brief History of Humankind, Homo Deus: A Brief History of Tomorrow, and 21 Lessons for the 21st Century. His writings examine free will, consciousness, intelligence, happiness, and suffering.

Harari writes about a "cognitive revolution" that supposedly occurred roughly 70,000 years ago, when Homo sapiens supplanted the rival Neanderthals and other species of the genus Homo, developed language skills and structured societies, and ascended as apex predators. This rise was aided by the agricultural revolution and accelerated by the scientific revolution, which together have allowed humans to approach near mastery over their environment. His books also examine the possible consequences of a futuristic biotechnological world in which intelligent biological organisms are surpassed by their own creations; he has said, "Homo sapiens as we know them will disappear in a century or so".

Source: Wikipedia


Mentioned in 2 documents

Ruben Laukkonen and Shamil Chandaria

A Beautiful Loop

Laukkonen and Chandaria propose that consciousness arises from a recursive brain process involving three key elements: a reality model, competitive inferences reducing uncertainty, and a self-aware feedback loop. This framework explains various states of awareness, including meditation, psychedelic experiences, and minimal consciousness. It also offers insights into artificial intelligence by connecting awareness to self-reinforcing predictions. The authors’ theory suggests that consciousness emerges when the brain’s reality model becomes self-referential, creating a “knowing itself” phenomenon. This recursive process underlies different levels of conscious experience and potentially informs AI development.

John Danaher and Stephen Petersen

In Defence of the Hivemind Society

The idea that humans should abandon their individuality and use technology to bind themselves together into hivemind societies seems both far-fetched and frightening, redolent of the worst dystopias from science fiction. In this article, we argue that these common reactions to the ideal of a hivemind society are mistaken. The idea that humans could form hiveminds is sufficiently plausible for its axiological consequences to be taken seriously. Furthermore, far from being a dystopian nightmare, the hivemind society could be desirable and could enable a form of sentient flourishing. Consequently, we should not be so quick to dismiss it. We provide two arguments in support of this claim—the axiological openness argument and the desirability argument—and then defend it against three major objections.