Homo Deus: A Brief History of Tomorrow (My Notes)

When I run across a book that is worth coming back to later, I often write up notes, and sometimes I share those publicly. That is the case with Yuval Noah Harari’s latest book, Homo Deus: A Brief History of Tomorrow. The book is a follow-up to his earlier work, Sapiens: A Brief History of Humankind.

These notes will give you a good understanding of what Harari covers, but they do nothing to capture his writing style, which is quite engaging. I highly recommend reading this book for yourself.

Overview

The overarching theme of the book is humanism. It traces how we came to see ourselves as fundamentally different from other animals and how we came to embrace the humanist creed. Its conclusion is that our technology is likely to finally break the humanist spell and replace it with a new religion: Dataism.

The New Human Agenda

The original human project centered on survival in the face of famine, plague and war. As we overcome these challenges, we are replacing them with a new agenda focused on immortality, bliss and divinity.

From Animist to Deist to Humanist

Most early humans were animists who embraced their fellowship with other animals. Many early peoples saw humanity as descending from snakes, and the name Eve even means “female snake” in most Semitic languages. We remained in this animist culture until the emergence of agriculture, when we shifted to a deist perspective in which animals were placed in the world for our benefit. With the Scientific Revolution, humans replaced the gods as the focus of worship, and humanism flourished. Later in the book, Harari makes the case that worship of data is now replacing worship of humans, leading to a shift towards Dataism.

Intersubjective Reality: Our Special Sauce

The belief that humans have eternal souls, whereas animals do not, is a cornerstone of our legal, political and economic systems. But Darwin shows that the closest thing to a human essence is our DNA, which is, essentially, data. What enabled humans to conquer the world wasn’t unique possession of consciousness, because animals have that too in varying degrees. What was unique was our ability to cooperate in very large numbers. Intersubjectivity depends on communication among many humans, and it creates a web of meaning — a shared understanding of reality. This intersubjective web of meaning is what organizes society and holds it together.

How do humans create meaning in the world? Storytellers are the weavers of our intersubjective reality. The Sumerian gods once served a function similar to that of today’s corporate brands, acting as legal entities that could own fields and slaves and execute various commercial transactions such as paying salaries and giving and receiving loans. For the Sumerians, the gods were as real as Google and Microsoft are for us today. These entities formed the subjects around which the narratives of intersubjective reality emerged.

Fact and Fiction Create “The System”

With the advent of writing, what was once managed in the heads of individuals could more easily be shared across very large networks of people, with each person acting as a small step in the massive algorithm that is the essence of bureaucracy. This is what we describe when we feel we are part of ‘the system.’ As the system corrects the mistakes that it makes through the process of documentation, it becomes increasingly precise and accurate over time.

There is a tradeoff between this kind of clear-sighted accuracy and the inspirational power of simplifying myths. Really powerful human organizations often rely on forcing fictional beliefs upon their members. These fictions enable us to cooperate more easily through intersubjective meaning. They are the basis of things like money, states and corporations. When we forget that they are just fiction, we sometimes run the risk of losing touch with reality.

Blind faith in the stories undergirding our intersubjective reality often leads to over-investing in fictional gods, nations and corporations, at the expense of individual human beings. “Religion is any all-encompassing story that confers superhuman legitimacy on human laws, norms and values.” It legitimizes our stories by moving them into a superhuman realm. Religion is a deal. Spirituality is a journey. Religions are closed systems: a complete description of the world with well-defined contracts and predetermined goals. Spirituality, in contrast, is more open: a mysterious journey into the unknown. Science is typically seen as at odds with religion, but modern history is a partnership between science and one particular form of religion, called humanism.

Modernity: Trading Meaning for Power

At its heart, modernity is a deal: humans agree to give up meaning in exchange for power. Until modern times, we believed that humans played a special part in a great cosmic plan. Modern culture rejects this, replacing it with a purposeless universe full of sound and fury but signifying nothing. We stand on the verge of omnipotence, yet teeter on the edge of a void of meaninglessness. Our core problem is that we think we can have power without paying the price of meaninglessness. We have elevated economic growth into our new source of meaning, a kind of religion, placing it above all other values. Economic growth has given us great power, but how have we staved off the threat of meaninglessness?

The key to maintaining our meaning has been to strip it of its reliance on participation in the grand cosmic plan. The humanist religion worships humanity instead. It is the experience of our humanity that gives us meaning. Our inner experiences don’t just create meaning for ourselves, but for the world more broadly. In earlier times, it was God who defined goodness, righteousness and beauty. Today, those answers lie within us. Our feelings give meaning not only to our private lives but also to our social and political processes. Beauty is in the eye of the beholder, the customer is always right, the voter knows best, if it feels good do it, and think for yourself: these are some of the main humanist credos.

Knowledge is (Human) Experience

The ways that we gain truth have shifted with humanism. In medieval Europe, Knowledge = (Scriptures) * (Logic). With the Scientific Revolution, Knowledge = (Empirical Data) * (Mathematics), and while this was a step forward in power, it was unable to deal with questions of value and meaning. As a result, humanism approaches the acquisition of knowledge as Knowledge = (Experiences) * (Sensitivity). The subjective phenomenon of experience includes sensations, emotions and thoughts, and sensitivity means both paying attention to these factors and allowing them to influence us. Humanism sees life as a process of inner change, moving from ignorance to enlightenment through the incorporation of experiences. As a result, it is breadth of experience that leads us to wisdom.

Liberal, Socialist and Evolutionary Humanism

Humanism is divided into three main branches. Liberal humanism, or simply liberalism, holds that each human being is unique, with a distinct inner life and stream of experiences that are fundamentally not replicable. Socialist humanism regards self-exploration as indulgent, a bourgeois trap of capitalism, and prefers socio-economic analysis that takes the group into consideration, with a strong focus on collective institutions. Evolutionary humanism has its roots in Darwinian evolutionary theory, in which conflict and natural selection evolve humanity into stronger and fitter beings, ultimately giving rise to superhumans. The twentieth century was largely a battle between these different branches of humanism, with liberal humanism looking like it would lose right up until the 1980s. Ultimately, the liberals’ success came about because they adapted best to the rise of the Information Age.

Undermining Liberal Humanist Assumptions

The question that arises is whether the emerging technologies of the 21st century will make our humanist assumptions obsolete. Many assumptions of liberal humanism don’t stand up to rigorous scientific scrutiny. Free will, for example, imbues the humanist universe with meaning, but modern genetics calls this assumption into doubt. Brain scans of people’s decision-making processes also show rather convincingly that most of our choices are actually unconscious, even if they feel conscious. What’s more, what free will we do have is likely to be increasingly vulnerable to manipulation by drugs, genetic engineering or direct brain manipulation. The idea that there is even a single, individual self, rather than a plethora of internal voices competing for attention, is also coming under fire. Even at the most basic level, our left brain and right brain are engaged in a constant tug of war. The left brain, or narrating self, works very differently from the right brain, or experiencing self, with very significant impacts on our experience of reality. Our narrating self is what shapes the story we tell ourselves about our life and its meaning.

Liberals uphold free markets and democracy because individuals matter and free will matters. Three societal shifts might change this:

  1. As intelligence is decoupled from consciousness, humans might lose economic and political value as they lose their economic and military usefulness.
  2. The system might still value humans collectively, but not individually.
  3. Upgraded superhumans might become valued more than the average citizen.

Loss of Human Economic Value

The idea that humans will always have unique value beyond non-conscious algorithms is wishful thinking, for two reasons: organisms are just algorithms shaped by evolution, and algorithmic processing doesn’t depend on the material it runs on. Humans have paved the way for technological unemployment by professionalizing and specializing in ways that make their work easy to migrate to algorithmic processes. As human employment is eliminated, wealth and power might become concentrated in the hands of a very small elite of algorithm-owning individuals. The algorithms themselves might even become their own owners, just as corporations today are their own legal persons. Today, much of the planet is already owned by non-human intersubjective entities; these companies and nation states operate in much the same way as the Sumerian gods of old.

Crowds, Not Individuals

The second threat to humanism is that humans may matter in aggregate, but not individually. Science now suggests that we are less unique individuals than composites of different algorithms that do not really possess free will but are instead shaped by genetic and environmental factors. What’s more, external algorithms may well know us better than we even know ourselves. As this happens, the unique value attributed today to individual consumers and voters will fade. People will no longer see themselves as autonomous beings, but rather as nodes in a network of digital algorithms. These algorithms won’t revolt and enslave us so much as smother our individuality in a blanket of usefulness. And in exchange for this usefulness, we will supply the network with all the data it needs to continue to augment its intelligence.

Unequal Upgrades

Splitting humanity into biological castes of the upgraded and the non-upgraded will destroy the assumption of equality that is a cornerstone of liberal humanism. The elites are likely to maintain a continued advantage over others, always staying a few steps ahead.

Techno-Humanism

The replacement for humanism could take one of two forms.

“Techno-humanism” sees Homo sapiens as having reached the end of the line and needing to be augmented into a new superhuman model: Homo deus. Seventy thousand years ago, Sapiens underwent a cognitive revolution that enabled us to experience intersubjective reality. A second cognitive revolution would accomplish the superhuman ends of the evolutionary humanists through genetic engineering, nanotechnology, and brain-computer interfaces. Revamping the human mind is an extremely complex and dangerous process. Upgrading may actually dumb us down, increasing collective human intelligence while harming individual intelligence and independence. Technology doesn’t necessarily want to listen to our inner voices; it wants to control them to make us fit more effectively into the system. In this scenario, the sacred human experience is converted into just another designer product.

Dataism

“Data religion” argues that humanity has reached the end of the line and that it is now time for a new entity to take our place. Dataism sees the universe as data flows and the value of any phenomenon as its contribution to data processing. With its focus on algorithms, dataism collapses the barrier between animals and machines. It expects electronic algorithms to eventually decipher and outperform biochemical algorithms.

With Dataism, competing ideologies are simply different approaches to data processing. Capitalism is distributed economic data processing, while communism is centralized. Democracies are distributed political data processing, whereas dictatorships are centralized. Politics is increasingly unable to keep up with the pace of decision-making, which is why most of today’s government is more administration than leadership, and why innovation may be best left to the markets.

From the Dataist perspective, the entire human species is a single data-processing system, with individual humans acting as its chips. We increase the performance of the system by increasing the number of processors, increasing the variety of processors, increasing the number of connections between processors, and increasing the flow of data through those connections. Increasing the number of processors happened with the first Cognitive Revolution, as humans increasingly coordinated themselves through various types of connections. The increased variety of processors came with the onset of the Agricultural Revolution and the kinds of trading and specialization that it enabled. Increased connections between processors arrived with the invention of writing and money and, eventually, the Scientific Revolution. The flow of data increased with the emergence of a global data-processing system and the communications systems that preceded it.

The Internet-of-All-Things

As all this data processing accelerates, a new system, the Internet-of-All-Things, will emerge and the need for Homo sapiens will vanish. Like capitalism, Dataism is now mutating into a religion. In this view, humans are merely tools for creating the Internet-of-All-Things, which will eventually leave the planet and spread out to pervade the galaxy with a God-like omnipotence. We mustn’t leave any part of the universe disconnected from the Internet-of-All-Things. Freedom of information is central to Dataism, not in the sense of a human right of access to information, but rather the right of information to circulate freely. Just as free-market believers trust in the invisible hand of the market, Dataists believe in the invisible hand of the data flow. By being part of this flow, we become part of something bigger than ourselves. Humanists believe that experiences occur within us and are our source of meaning. Dataists believe that our experiences mean nothing unless they are shared in the great data flow.

The Sacredness of Data

In early humanism, people kept believing in God by arguing that humans are sacred because we were created by God. Eventually, the focus simply shifted to humans. Today, with Dataism, we will first argue for the sacredness of the Internet-of-All-Things because it serves human needs. But eventually, the focus will simply shift to the sacredness of data. Whereas humanism taught us to listen to our feelings, we are now learning that algorithms can know us better than our feelings do, and so we will learn to trust algorithms over ourselves.

Losing Ourselves

We must ask ourselves what might be lost by replacing conscious intelligence with superior non-conscious algorithms. But even if Dataism is wrong and organisms aren’t just algorithms, there is no guarantee that Dataism won’t still take over. Dataism will likely start off by serving the human pursuit of health, happiness and power, but once authority shifts from humans to algorithms, humanism may lose its relevance. Dataism, then, threatens to do to Homo sapiens what we have done to other animals: plugging us into a global network and evaluating us based solely on our contributions to the value of that network. Once we lose relative value in that network with the rise of algorithms, we may turn out to be just a passing ripple in the cosmic data flow.

The book closes with three questions:

  1. Are organisms really just algorithms, and is life really just data processing?
  2. What’s more valuable — intelligence or consciousness?
  3. What will happen to society, politics and daily life when non-conscious but highly intelligent algorithms know us better than we know ourselves?

11 thoughts on “Homo Deus: A Brief History of Tomorrow (My Notes)”

  1. Bill Ed Abraham

    Very interesting summary. I think I want to read the book, even though it sounds like very heavy reading! Is there a need/benefit to read “Sapiens” first?

    1. To be honest, Bill, I think that Sapiens is the better read. So, if you’re limited on time, I would start there. That said, if the future is more the focus of your interest, you could definitely read Homo Deus without having read Sapiens.

  2. You mean in the near future I won’t need God and I won’t die? WHERE can I get a copy of this book!? As an electrical/computer engineer for a Fortune 50 company, I’ve been designing and building computers and managing data for most of my life… pre- and post-internet. Anyone who truly understands technology is aware of how fragile it is. Just one example… our power grid has recently been brought to its knees here in the USA (we consider ourselves a technically advanced nation) with very little provocation. Anyone putting all of their faith in technology and humans better get back to church. It’s interesting to me that the author of the book being mentioned is a historian… as history has proven over the millennia that God has and always will have the final say… NOT man.


  3. Excellent summary. It is not easy to distill the key points of such information-dense material. I see a lot of overlap with cooperative economics, especially between the “stories” mechanism and the “choreographer” of cooperative economics, which is implemented using the “correlating device” of cooperative game theory. This is the key question I have for Prof. Harari: what do you see as the key differences between the stories mechanism and the social norms that are internalized through education and other forms of cultural institutionalization?

    Being that you have summarized his views so well — could you give me your opinion of what he might answer?

    Here is an article that summarizes the key ideas of cooperative economics:

    https://medium.com/the-internationalists-journal/key-lessons-from-a-cooperative-economics-a-synopsis-9a656ee28ca9

