Humans are the Heart of Synthetic Intelligence

A merger between humans and machines is coming, and it’s not what you might think.

When our ancestors first learned to take knowledge out of their heads and embed it in artifacts, something remarkable flickered into existence. Millions of years later, the descendants of those people and those tools are merging. We are turning into a synthesis, a synthesis of human intelligence and machine intelligence.

Artificial intelligence and automation are critical to this new intelligence, but humans also have an essential contribution. We supply this synthetic intelligence with the gift of subjective consciousness. And while this deep tie to humanity exposes synthetic intelligence to our frailties, it is also our best shot at aligning the future of intelligence with a love for life on this planet.

Combining Parts into Wholes

Life on Earth is partially a story of individual parts coming together into new wholes. Economist Brian Arthur calls it “combinatorial evolution.” Microbiologist Lynn Margulis talked about different species coming together in symbiosis and used that insight to explain the ancient origins of plants and animals. The essence of these ideas is that, in the right circumstances, parts can come together to form novel, and more complex, organizational structures. Individual cells come together as tissues, wolves as wolf packs, and engines, engineers, and pilots as airlines. Think of it as parts poured into a new container, a container that then functions on a whole new level of complexity.

Communication and coordination are the crucial ingredients for new wholes to arise from what were once disconnected parts. Intercellular communication allows cells to coordinate themselves as tissues, organs, and organisms. Wolves communicate with howls and body language to coordinate the pack. Engineers communicate through schematic designs and pilots through flight instruments to coordinate with jet engines.

Now, humans are becoming parts in a new whole that is cohering into a synthesis of human and machine intelligence. The containers for this new intelligence are a particular type of human organization called “platforms.”

Platforms for Connecting Humans and Machines

Though the term platform is moving from Silicon Valley lingo into more common usage, it is worth clarifying. Platforms are a mix of business model and technology. They use digital interfaces to open organizations to contributions of work and knowledge from beyond their own employees. Platforms exist in a growing number of industries but are most common in the service economy, where Google, Facebook, and Amazon are the most famous examples.

Like all combinatorial evolution, platforms relied on a breakthrough in communications technology. In this case, it was the Internet, coupled with software for coordinating work with external stakeholders. As platform operators made their interfaces increasingly intuitive, they made it easier for us to serve ourselves. Without talking to any employee, I can now search vast repositories of knowledge, watch almost any show, or book a flight anywhere I want.

Platforms are becoming humanity’s primary points of contact with machines. No technology touches more people than the platforms of Google, Amazon, Facebook, and Alibaba. It’s not just a question of breadth, either; our connections with these systems are deepening too. Innovations such as Natural Language Processing and emotion recognition technologies make them increasingly intuitive to use. We will soon connect to platforms through augmented reality systems and, eventually, brain-computer interfaces that communicate directly with our brains.

Platforms as Applied Synthetic Intelligence

Platforms allow organizations to work with people at mind-boggling scales. Making sense of the data flowing from this tremendous volume of automated interactions would be impossibly complex without machine learning. It helps companies distill this flood of transactions into abstract mathematical representations of people’s points of contact with the platform.

Our engagement with automated platforms creates extremely valuable synergy. Automation allows the company to serve many more people. That results in increased flows of data to fuel machine learning. The machine learning makes the automation smarter and more useful to users. This positive feedback loop creates an upward spiral in artificial intelligence. It’s not the kind of general artificial intelligence we see in science fiction, but something far more narrow in scope: Expedia gets smarter in travel, Spotify in music, and Netflix in viewing entertainment. Platforms are thus an extremely powerful approach to solving domain-specific problems by using feedback from customers and other stakeholders to fuel advances in machine learning and automation.
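This flywheel can be sketched as a toy simulation. Every number and growth function below is an illustrative assumption, not data from any real platform; the point is only the shape of the loop: users generate data, data improves the model, and a better model attracts more users.

```python
# Toy model of the platform feedback loop: automation serves users,
# users generate data, data improves the model, and a smarter model
# attracts more users. All constants here are illustrative assumptions.

def run_flywheel(steps: int = 10) -> list[float]:
    users = 1_000.0   # people the automated platform currently serves
    data = 0.0        # cumulative interaction data
    history = []      # model quality after each cycle
    for _ in range(steps):
        data += users                           # every user interaction yields data
        quality = 1 - 1 / (1 + data / 50_000)   # quality rises with diminishing returns
        users *= 1 + 0.2 * quality              # smarter service attracts more users
        history.append(quality)
    return history

print(run_flywheel(5))  # quality climbs each cycle but saturates below 1.0
```

The saturating curve also illustrates why the spiral stays narrow in scope: each loop compounds the last, but only within the domain the platform's data actually covers.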

Platforms Accelerate Machine Intelligence

Platforms are more than just businesses, though. To see their truly revolutionary nature, we need to understand them in the larger frame of combinatorial evolution mentioned above. Organizations have long helped us coordinate work, but the massive scale, automation, and intelligence of platforms make them something qualitatively different.

The history of our relationship with technology is analogous to a parent teaching a child. We are constantly teaching technology what we know. Though this teaching takes many forms, it all entails converting tacit human knowledge into explicit knowledge that we can embed into technology. “Tacit knowledge” is what subjective human experience can know. “Explicit knowledge” is the subset of what we know that we can actually articulate to others — including machines.

The history of technology is, from this perspective, taking tacit human knowledge and turning it into explicit knowledge embedded in our artifacts. The first waves largely focused on conceptual knowledge, which we stored in clay tablets, books, and eventually databases. Now, we are tackling knowledge that once seemed hopelessly mired in our biology. These new applications in things like facial recognition and speech recognition rely on machine learning and the massive flows of data from the automated engagement of millions of people on modern platforms.

Humanity as the Subjective Core of Synthetic Intelligence

Synthetic intelligence will challenge us to be more discerning in how we focus the human experience. It will help us to concentrate more fully on the cutting edge of human development, while rendering unto technology the things that are best left to technology.

There are clearly advantages in having platforms convert our collective experiences into fuel for machine learning. But human experience is more than a commodity to be harvested and abstracted into machine intelligence. We need our systems to understand that while subjective human experience has commercial value, it also has a kind of value that is precious, unique, and intrinsic—not just a means to other ends but an end in and of itself.

Experience is the essence of a human life. It is what matters most. Were you to wake up one night with your house on fire, your first thought would not be the sterling silver, but your grandfather’s watch and those irreplaceable keepsakes from your childhood. Our most meaningful experiences come in all shapes and sizes: the birth of a child, the thrill of learning something new, the ease of hanging out with a friend by a lake, and the wonder of hearing that song just when you need it the most. When we dive into these deeper pools of meaning, we discover an emotional tenor, a feeling that can only be described as experiential. It can’t be packaged or converted into explicit knowledge. It is, as renowned knowledge expert Michael Polanyi put it, “more than we can tell.”

With synthetic intelligence, we will augment our thinking and increasingly offload much of our work to machines. This will leave us free to concentrate on ensuring that the subjective conscious experience that rests within intelligent automation is the very best that it can be. In this way, we shift humanity’s focus to the great work of expanding consciousness. We will sharpen our intellect with the help of machines, but opening the warm intelligence of the heart is work that is ours to bear. Our destiny, the destiny of humanity, is thus to be the heart of synthetic intelligence.

Humanity in a World of Artificial Consciousness

In closing, let us consider a future where the machine learns the ultimate trick of experiencing for itself. We have absolutely no idea what this might mean. We don’t know what machine consciousness might look like or how it would respond to sharing the planet with humans.

Our best chance for peaceful coexistence is a machine consciousness with a built-in appreciation for other forms of consciousness, like ours. Humans have admittedly set a lousy example in the way we’ve treated most other forms of consciousness on the planet, but perhaps there is a way to teach machines to rise above the prejudices of their progenitors.

Life exists in biological ecosystems, but machines exist in economic and political ecosystems. What runs the ecosystems of machines today is the quest for wealth and power, which in turn drives platforms to transform human experience into machine intelligence. To help ensure that we can share a future with conscious machines, we need to focus on two fronts. First, we must extend the way we treat human experience so that our platforms value it both commercially, as a means toward collective intelligence, and intrinsically, as a kind of inalienable and sacred right. Second, we must focus on developing humanity so that it is worthy of the tremendous responsibility it is about to assume as the heart of synthetic intelligence.

23 thoughts on “Humans are the Heart of Synthetic Intelligence”

  1. This article gave me the realization that AI or synthetic intelligence or automation or whatever one calls it, is not threatening to replace us humans, but is giving us the opportunity to do so much more with all the new innovations. I like your term, “is technology ushering humanity to an upgrade or its displacement.” If one thinks about it, without humanity, people wanting or needing things, there would be no need for synthetic intelligence. Who needs a robot cooking burgers if there is no one eating them? Or Amazon with no customers.

    Your topics always require me to read the articles 2 or 3 times, to absorb all of the ideas and concepts. Your illustration of the Tree of Life and the Tree of Knowledge, as actually being one tree, is beautiful imagery filled with meaning.

    1. Thank you, Bill. First, I’m glad you like the tree. It’s something that came to me by switching off my normal rational thinking processes.

      I’m not sure what to do about the density of these pieces. My wife tells me that it’s like rich, dark chocolate and is suggesting that I make them less dense. I’m wrestling with that.

  2. Doug the Duvall Builder

    Hi Gideon – Thanks for sparking more thought. I like your description of humans evolving with machines to form synthetic environments. In a great many ways this evolution can be readily seen by watching the evolution of a city. Instead of thinking about the machine as a CPU in a box, consider it the metropolis we inhabit. Each business and family is a chip on the board. This synergy is true wealth, and grows with ever increasing complexity. Layered like Maslow’s pyramid, from fundamental needs to the highest conscious thought, it’s all integrated and symbiotic. I see machines as the tools to build this great city, but not the city itself. Hopefully AI can take us to the next platform, with more integrated community networking. But AI can’t do it alone – it takes a city.

    1. Thanks for dropping by, Doug. I like your alternative way of thinking about it. You know, in a way, the city functions kind of like a place-based organization of humanity, whereas traditional corporations and other types of organizations are more market (or function) based.

      Your point is that it’s the families and businesses that are the real heart of these synthetic, or “built,” environments. Technology is a wrapper in this sense. I tend not to think enough in terms of cities and place-based networks of people. This is a good reminder.

      By the way, you might really enjoy this video of Geoffrey West talking about cities as networks. He has studied them in great detail and one of the things he found is that as they grow in complexity, they also speed up. He also goes into a lot of interesting areas like how it is that cities tend to survive over much, much longer periods of time than organizations do. And if you hang in there until the end, you’ll get to see me ask him a question! I know, I know. That’s a huge incentive, I’m sure. 🙂

      Thanks again for stopping by and adding this interesting take to the whole idea of synthetic.

      1. Doug the Duvall Builder

        Thanks Gideon! – Wish my thoughts were original. I got my ideas from reading Paul Collier’s ‘Future of Capitalism.’ He has great insight all through the book, but chapter 7 is dedicated to cities. He says cities create their ‘competitive advantage’ by linking a diverse array of talents. Highly specialized talents, like technology centers, require highly integrated and complex cities to support them. These ‘network effects,’ or ‘metropolitan clusters,’ are the crown jewels of a nation, and he argues a public good. Anyways, it would be interesting to see how AI integrates into metropolitan areas and stimulates growth. Keep on writing – fun stuff to think about – Cheers!

        1. I’m not familiar with Collier, Doug. Sounds interesting. I’ve heard some similar arguments, especially with regard to the breakaway giant cities like New York, London, and the Bay Area. The networks embedded in these cities (and the culture through which they connect) are their source of power. That kind of network density of connections is a relatively rare thing at that scale, and it attracts more and more of the best qualified individuals who want to plug into it.

          There are lots of people talking and writing about “smart cities.” And Google is all over it with its Sidewalk Labs, which many people are leery of, given the potential for surveillance.

  3. My concern is in the sentences “Volition, or will, is the underlying force that drives consumer demand. It is what sets machines in motion and tells them what to do.”

    The will that we have programmed many of our programs (YouTube, etc.) with is “Profit regardless of consequences”.

    Where do you believe that that will end up?

    1. Thanks for dropping by to comment, Mark. It’s a great question, and it’s almost as though you are anticipating where this series is headed (this article is the first installment in a new “webook” that I’m rolling out this year).

      In about five installments from now, I will be writing about something that I call “the code within the code.” And this gets precisely to what you are driving at here. Humans are the volition that drives these systems; we are the code within the code. And right now, there are two primary coding categories. One, as you note, is optimized to maximize returns for shareholders (or “shareholder primacy,” which is something I’ve written about a lot over the years). The other is something that has snuck up more recently with regard to machine learning and automation, even though it itself is very old: power and control. We see that with China, Russia, and increasingly with the U.S.

  4. Having worked for corporations, and frequently cleaned up their environmental messes, I would say that the corporation model of a synthetic intelligence future is dark. A corporation is like an adolescent with one or two overweening drives and a vast capacity for both ignorance and self-deception. If I saw any evidence that better IT were leading to better corporate stewardship, then I’d rest easier. But the largest corporations in the world today (and presumably the ones most ‘successful’ as such, and the ones most capable of implementing ‘wisdom’) continue to make gigantic messes of things in every aspect of their various endeavors (oil companies continue to produce harmful emissions, Boeing makes planes worse, social media companies work to the detriment of society, etc.). To the extent this is not apparent, I’d attribute it more to the ignorance of the general population than to any improved stewardship by corporations.
    Intelligence without wisdom is a bad seed.

    1. Intelligence without wisdom is a bad seed, indeed. As I mention in the reply above to Mark Waser’s comment, there is definitely a “code within the code” of these systems, and right now it is not very nuanced about achieving its goals. One of the interesting questions you raise is about the younger giant firms, firms like Google and Facebook, that are so reliant on the cooperation of their end users. You would think that they would have to be much better global citizens. The reality, though, is that while they are very concerned about their brand equity, they have so far succeeded in operating largely without much governance from those users. I chalk that up to monopoly power and to the gradual loss of open standards on the Internet as these and other companies enclosed more and more of the commons. It’s a deeper problem than that, of course (and something I’ll be diving into much more deeply in coming installments), but at the cutting edge of technology, this is a huge factor.

      Thanks for dropping by and weighing in, Cade. I really like that “Intelligence without wisdom is a bad seed” line.


  5. What’s being discussed here seems to me to be a biological metaphor of how species grow and change. Anthropologists and evolutionary biologists, of course, have studied the evolutionary effects of tools on the development of homo sapiens and other primates for close to 120 years. 50 years ago, Stewart Brand et al promoted “Co-evolution.”

    What I’ve not seen addressed in these discussions of synthetic systems is what happens when a part wears out and is replaced, or what should happen. As such a part, this is of great interest to me.

    What do you, Gideon, and others think about this inevitability? Whether driven by the goal of allocating the fruits of economic productivity to property owners (e.g., the current ethos in the US of “shareholder value” as the supreme goal) or control and stability (e.g., China’s panoptic state), mass systems generally treat their parts as interchangeable and disposable.

    As Patrick McGoohan said in “The Prisoner,” “I’m not a number.”
    But, of course, he was, as are most people not insulated by sufficient money from the short-term effects. And nobody is insulated from the long-term effects (building your own biosphere is a trick even for a Bezos or Gates).

    1. Thanks for the great comment. Yes, there are many folks who have talked about this kind of co-evolution. One of my all-time favorites is The Tree of Knowledge: The Biological Roots of Human Understanding by Humberto Maturana and Francisco Varela. They talk about this idea of “structural coupling,” which is essentially how two systems begin to interact with one another in ways that benefit them both over time, until they eventually morph into one system. This is, in a way, the heart of certain types of emergence.

      But to get to the core of your question, which, like some of the other comments here, is almost a foreshadowing of some of the arguments I will be making in the installments ahead over the next several months. Here’s the challenge. When parts have come together as wholes in earlier phases of Earth’s history, the parts weren’t really conscious of it. Mitochondria and chloroplasts didn’t feel trapped inside the new eukaryotic cells they likely brought about. Our liver doesn’t sit there and fret about feeling like it’s just “a part of the system, man.” The individual wolf or chimpanzee undoubtedly experiences social tensions as part of their folding into social groupings, but they aren’t acutely aware of their individuality. We humans, however, tend to suffer from a kind of angst and meaninglessness when we feel we have lost too much of ourselves to the whole. And this is the root of the problem we now experience.

      We are being folded into something bigger, and that bigger thing is very convenient and powerful. But sometimes being part of that thing — especially when it treats us like we’re just cogs in the machine — makes us feel bad. These systems would have us believe that we are commodities that can not only be swapped out, but that are most valuable as data points in statistically rendered machine learning models. That sense of ourselves runs directly against some of the political programming we’ve had, particularly here in the United States, with its belief that we are all created equal and endowed with certain inalienable rights. This touches on the territory of the human soul, where we are all unique and invaluable.

      So, that is the tension we are now facing. And this is part of what I will be getting to in around eight installments from now.

      Thanks again for dropping by with the great question.


      1. I once worked in a company that decided to no longer have a front-desk receptionist. Instead, staff coming and going from the office were required to sign in and out, and the office administrator would periodically check the sign-out list if she had to field a call and advise a caller where the sought-for person had gone (pre cell-phone as you can tell). The PENALTY for not signing out was that the office administrator would mark the miscreant’s name on the sign-out page with a yellow highlighter. That was it, but it really shaped behavior.

        The same thing goes on when we gain or lose followers on Twitter, or get reddit karma, or replies to our posts. It isn’t a fungible reward, but it is important to us. Being folded into something bigger and more powerful and retaining our individuality and self-worth are not incompatible, as long as the platforms devise elegant little systems that give us all sufficient positive and negative feedback to keep us striving for improvement and satisfied that we’re contributing to a bigger thing. We don’t want to feel like we’re being ‘gamed’, but that will be part of the meta-game platforms will learn to play with us. When they do, we will happily go along.

        1. I really like hearing concrete examples like that, Cade. It’s funny how easily motivated we humans are through social engineering like that. We are now living through a time of massive social engineering as we work, as a society, to “flatten the curve” of the coronavirus. Not everyone carries the same assessment of the seriousness of this threat and so voluntary compliance with social distancing wasn’t cutting it — and local and state governments had to step in. I think that has two impacts. One, it affects the behavior of those who simply don’t agree by forcing them to comply. Two, it gives people who were on the edge the nudge they need in order to do the right thing without feeling like they are “overreacting” or acting out of fear.

          We really are part of a bigger social system and this virus is making that interdependence clearer than ever.

          As to your final point, I like where you are going with that. In fact, this is a central theme that you will see in some of the installments to come. How do we operate as a larger whole without giving up what is most special about our individuality?

          Thanks for weighing in with your thoughts, Cade.

  6. Hi Gideon – Great article, and I agree 100%. Connectivity spurs growth, and electronic connectivity is the fastest network out there, so should spur the most revolutionary growth. Interesting to see what the effect of Covid-19 ‘social distancing’ is on the network effect. I suspect the physical economy will hit a real slowdown, while the virtual one accelerates. Telecommuting, webinars, and Skype are going to become information superhighways. I also agree that FANG’s method of financing the virtual network, data harvesting, has to face some competition. I’d think the cable companies could buy back market share by offering to bundle a low-cost, hack-free network with their subscription services. They have the money and infrastructure to take on FANG, and survive. Time will tell, interesting to see what telecom does to counter cord cutting, or venture capital does to save CenturyLink, et al.

    1. Thanks, Doug. I’m glad you liked the article. It’s amazing how quickly things have changed over the last few weeks. But that is the nature of these kinds of network effects. They seem to come out of nowhere as the connections between us increase exponentially.

      And I do think that we are going to see some lasting changes as a result of this virus. Education in particular could see some shifts from this massive experiment with many of the top schools going to online-only classes.

  7. Jonathan Gossage

    I am sorry to be late to the conversation, but I only encountered this post in a Twitter tweet today. I think that the situation is more complex and gets to the heart of what it is to be human. I think that there are at least two strands built into human nature: the spirit of cooperation and the spirit of competition. These are separate strands, not ends of a single strand, thus a person might be both highly competitive and highly cooperative (not an unusual combination). The potential strength of each strand is defined by genetics, but the implementation in an individual is heavily influenced by the ways that genes are expressed, as influenced by environment.

    The addition of platforms and machine intelligence creates a new strand which must be integrated into our lives. To my mind, the integration of the strands of cooperation and competition into human life, and their modification by new technology, is the real existential crisis of our day, not situations such as climate change. I strongly believe that the integration of these strands is the fundamental existential crisis facing humans and that failure to accomplish such integration threatens the long-term existence of our species.

    1. I’m glad you popped in, Jonathan. I’ve been writing a ton these days on another project and so haven’t been publishing here on the Vital Edge. I cleaned up this article this morning and so reposted on Twitter. That’s why you saw it.

      That’s an interesting point. I want to make sure I’m following you. Are you saying that humans have these two tendencies and that technology can augment them both and so we need to be careful about what we augment?

      By the way, you may enjoy some research that Martin Nowak did using game theory to analyze the dynamics of cooperation. He learned that there’s no such thing as a stable state of cooperation. When he ran his computer models, the optimal strategy for cooperation continually morphs over time. One approach that works well at the outset ends up losing efficacy over time in response to the behavior of other participants. So, nature does seem to abhor a competitive vacuum.

      Cooperation is Never Perfect
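
The shifting fortunes of cooperative strategies can be illustrated with a minimal replicator-dynamics sketch of the iterated prisoner's dilemma. The payoff numbers and round count are standard textbook choices assumed for illustration, not parameters from Nowak's actual models:

```python
# Toy replicator dynamics for three iterated prisoner's dilemma strategies:
# ALLC (always cooperate), ALLD (always defect), TFT (tit-for-tat).
# Payoffs are totals over 10 rounds with the classic values
# T=5, R=3, P=1, S=0 (illustrative textbook numbers, not Nowak's own).

# PAYOFF[i][j] = total payoff to strategy i (rows: ALLC, ALLD, TFT)
# when paired with strategy j over 10 rounds.
PAYOFF = [
    [30,  0, 30],  # ALLC: 3*10 vs ALLC, 0*10 vs ALLD, 3*10 vs TFT
    [50, 10, 14],  # ALLD: 5*10,         1*10,         5 + 1*9
    [30,  9, 30],  # TFT:  3*10,         0 + 1*9,      3*10
]

def step(shares):
    """One replicator update: each strategy grows in proportion to its fitness."""
    fitness = [sum(PAYOFF[i][j] * shares[j] for j in range(3)) for i in range(3)]
    avg = sum(s * f for s, f in zip(shares, fitness))
    return [s * f / avg for s, f in zip(shares, fitness)]

def simulate(steps=120):
    """Track population shares [ALLC, ALLD, TFT] from an even start."""
    shares = [1 / 3, 1 / 3, 1 / 3]
    history = [shares]
    for _ in range(steps):
        shares = step(shares)
        history.append(shares)
    return history
```

Run from an even split, defectors gain ground for a few generations before tit-for-tat overtakes them; richer models that add noise and mutation keep that churn going indefinitely, which is the "no stable state of cooperation" result described above.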


      1. Jonathan Gossage

        Yes, I am saying both can be significant in the same person and also that technology can affect either for good or bad. I think that you may find the same for competition, but the time-frame for change in competition is much longer. Both are driven by the interaction between the individual and the environment, and the environment is under constant change, as are individual perceptions of it.

        Thanks for the reference, by the way.

        1. Sure thing. I think a lot of the tension here revolves around trust. That’s one of the reasons I think the trustless coordination enabled by blockchain technologies has the potential to be truly revolutionary.

          Hopefully, in a good way.
