Autonomous Systems Unleashed

What will it be like when machines make and execute decisions without any human intervention? Why would we make such systems and what are their implications for the future of human judgment and free will?

Keeping Humans in the Loop

Predator Unmanned Aerial Vehicle ground control station (Wikimedia Commons)

Hundreds, if not thousands, of science fiction stories tell us it’s a bad idea to build automated systems without “human-in-the-loop” (HITL) processes to keep them in check. In real life, the need for human intervention before an automated process executes is most obvious when that process has serious, irreversible consequences: like killing a person with a drone.

HITL takes many forms. In high-stakes situations like drone strikes, humans make the difficult judgment call before the weapon’s deadly automation kicks in. In lower-stakes situations, humans spot-check systems and clean up after their limitations during and after the automation runs. Seasoned customers at Amazon Go automated retail outlets make no contact with employees, simply grabbing what they want from the shelf and walking out. But some shoppers get confused, so Amazon keeps a few employees in stores to keep things running smoothly.

Cutting the Cord

Given the benefits of keeping humans in the loop, why do systems designers keep pushing so hard to design humans out of the loop? The simple answer is that, over the long term, “autonomous systems” shrink labor costs. They also increase profits by boosting revenues through quality improvements and greater operational scale. In fact, our most fundamental assumptions about the design of autonomous systems are driven by a kind of “code within the code” that is deeply shaped by our economics.

Though they are the focus of much of my writing, this article sets aside the ethical and societal implications of “cutting the cord” on these autonomous systems, focusing instead on how they are likely to unfold over time.

Predict, Judge, Act

Today’s machine learning systems are essentially automated systems for making predictions. This is the point made by Ajay Agrawal, Joshua Gans, and Avi Goldfarb in their excellent book, Prediction Machines.

One of the book’s useful contributions is its insight that decision making consists of prediction and judgment. Deciding to bring an umbrella to work means knowing the weather forecast as well as judging the benefit of having an umbrella if it does rain against the cost of carrying it around all day if it doesn’t. The authors sketch out a useful model for understanding the relationship between prediction and judgment while pointing out that the increased supply of predictions brought about by machine learning will also increase the demand for judgment.
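To make the split concrete, here is a toy sketch of that umbrella decision in Python. The probabilities and costs are invented placeholders: the forecast supplies the prediction, and the cost parameters encode a particular person’s judgment.

```python
# Decision = prediction + judgment, illustrated with the umbrella example.
# All numbers below are hypothetical placeholders.

def decide_umbrella(p_rain: float, cost_wet: float, cost_carry: float) -> bool:
    """Return True if carrying the umbrella has the lower expected cost.

    p_rain     -- the prediction: probability of rain today
    cost_wet   -- judgment: how bad it is to get soaked (user-specific)
    cost_carry -- judgment: the nuisance of lugging an umbrella all day
    """
    expected_cost_without = p_rain * cost_wet  # risk getting wet
    expected_cost_with = cost_carry            # pay the carrying cost no matter what
    return expected_cost_with < expected_cost_without

# The same forecast yields different decisions under different value systems:
print(decide_umbrella(p_rain=0.7, cost_wet=10, cost_carry=2))  # True: bring it
print(decide_umbrella(p_rain=0.7, cost_wet=1, cost_carry=2))   # False: a Seattleite shrugs
```

Notice that the prediction (p_rain) and the judgment (the cost parameters) stay cleanly separated: machine learning improves the first input, while the second remains a statement about what the decision maker values.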

Judgment is Tricky


When we make predictions, we draw on a sample of data, hoping it tells us something about larger patterns beyond the sample. It’s a bit like scooping a ladle of chicken soup to assess the characteristics of the whole pot. The challenge is that such samples often lack the “ergodicity” that would make them intrinsically representative of the larger population. The chunks of chicken may rest stubbornly and disproportionately at the bottom.
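A small simulation makes the point. Everything here is invented for illustration: a “pot” where the chicken settles toward the bottom, so a ladle skimmed off the top badly misestimates the whole.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "pot" of 10,000 spoonfuls: 1 marks a chunk of chicken, 0 is broth.
# About 20% of the pot is chicken overall, but the chunks settle toward
# the bottom, so the top layer is almost all broth: the pot is not well mixed.
pot = np.zeros(10_000)
is_chicken = rng.random(10_000) < np.linspace(0.02, 0.38, 10_000)  # density rises with depth
pot[is_chicken] = 1

ladle_from_top = pot[:500]                        # scooping off the surface
ladle_after_stirring = rng.choice(pot, size=500)  # a genuinely random sample

print(f"true chicken fraction:   {pot.mean():.2f}")
print(f"top-of-pot sample:       {ladle_from_top.mean():.2f}")        # biased low
print(f"stirred (random) sample: {ladle_after_stirring.mean():.2f}")  # close to the truth
```

The top-of-pot ladle isn’t wrong because it’s small; it’s wrong because the underlying process isn’t well mixed, and no amount of confidence in the ladle fixes that.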

Predictions and judgments are intimately related. If you know it will rain today with 100 percent certainty, judging whether to bring an umbrella becomes much easier. Without knowing that, you need to make a judgment call. So one reason judgment is tricky is that we lack perfect predictions.

The other tricky thing about judgment is that it relies on our underlying values, and value is always in the eye of the beholder. Even if I know with absolute certainty it will rain today, I may still choose to leave my umbrella at home because I’m from Seattle and that’s just not how we roll here.

Automating Judgment

Humans excel at embedding intelligence into artifacts — and one of our favorite things to embed is judgment. Legal codes are an obvious example of the way we embed judgments into stone tablets, books, and databases.

We also embed judgment into artifacts through the much more common practice of design. The stone tools shaped by early hominids millions of years ago are essentially a record of human judgments about how to cut, scrape, and pierce — embedded in designs. Vehicles aren’t yet fully autonomous because we still need humans to intervene for many judgment calls, but Tesla, Uber, and others are now aggregating those judgments and will eventually embed them into future iterations of their software.

While vehicle safety goals are fairly objective, the goals of an autonomous system can also be quite subjective. Good judgment often requires understanding what the end-user actually values. Researchers at Google’s sister company, DeepMind, are streamlining the capture of end-user preferences through a variety of techniques and are using those preferences to train machines to drive an old Atari racing game in playfully human ways.
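At the heart of that preference-capture work is the idea of fitting a reward model to pairwise human comparisons. Below is a minimal sketch of one standard approach (a Bradley-Terry model fit by gradient ascent), not DeepMind’s actual pipeline; the features, clips, and rater are invented placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each row summarizes one game clip with made-up features, e.g.
# [speed, smoothness, crashes]. A human watches two clips and picks the one
# that looks more "playfully human". We fit a linear reward r(x) = w . x so
# that preferred clips score higher.

def fit_reward(pairs, prefs, lr=0.1, steps=500):
    """pairs: list of (x_a, x_b) feature vectors; prefs: 1 if a was preferred, else 0."""
    w = np.zeros(len(pairs[0][0]))
    for _ in range(steps):
        for (xa, xb), y in zip(pairs, prefs):
            p_a = 1.0 / (1.0 + np.exp(-(w @ (xa - xb))))  # P(a preferred | w)
            w += lr * (y - p_a) * (xa - xb)               # gradient ascent on log-likelihood
    return w

# Invented training data: this rater prefers smooth driving over raw speed.
clips = rng.normal(size=(20, 3))
pairs = [(clips[i], clips[i + 1]) for i in range(0, 20, 2)]
true_taste = np.array([0.2, 1.0, -1.5])  # the rater's hidden values
prefs = [int(true_taste @ (a - b) > 0) for a, b in pairs]

w = fit_reward(pairs, prefs)
print("learned reward weights:", np.round(w, 2))  # roughly tracks the rater's taste
```

Once a reward model like this exists, it can stand in for the human rater, letting a learning system optimize against captured judgment rather than a hand-written score.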

Workings of an Autonomous System

While there are undoubtedly limitations, we are on track to build automated judgment for a variety of applications. What follows is a framework for seeing how automated judgment serves as a bridge to fully autonomous systems:

The four stages of a fully autonomous system form a feedback loop through which automation feeds intelligence, which feeds automation in an expanding spiral of autonomous intelligence.

Overview:

The graphic above represents systems that are both intelligent and automated. The top-left and middle portions reflect the intelligence of the system. This machine intelligence connects to automation in the lower right, and the impact of that automation is then evaluated as feedback for retraining and further optimizing the system.

Machine Intelligence:

The intelligence of an autonomous system grows through a two-step process. Sensors draw in data samples to train algorithmic models to do things like recognizing a human face. After training, the model can be used for large-scale inference work, like recognizing millions of people on Facebook. You can learn more about this distinction between training and inference here: A Handy Way to Think About Machine Learning.
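In code, the split looks something like this: one expensive fit, then cheap prediction repeated at scale. The synthetic dataset below is a stand-in for real labeled faces.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Training: done once, on a labeled sample (a stand-in for labeled face images).
X_train, y_train = make_classification(n_samples=1_000, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# Inference: cheap and repeated, on a flood of new, unlabeled data.
X_new, _ = make_classification(n_samples=100_000, n_features=20, random_state=1)
predictions = model.predict(X_new)  # one trained model, a hundred thousand recognitions
print(predictions[:10])
```

The asymmetry is the economic point: training costs are paid once, while inference amortizes them across millions of decisions.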

As we’ve seen already, in order to make actual decisions, autonomous systems must be able to train models for both prediction and judgment and then use those models to do inference work, generating decisions at scale.

Automation:

Once the system has decided what to do, it executes additional logic for how to do it. Amazon’s algorithms predict products you may like, judge how best to respond to your actions on the site, and then call additional code to automatically render the appropriate information on screen. In the same way, a self-driving car predicts a dog ahead on the road, judges that it must stop immediately, and sends instructions to the brakes to make that happen automatically.
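A stripped-down sketch of that hand-off might look like the following. The probability threshold and braking formula are illustrative assumptions, not anyone’s production logic.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BrakeCommand:
    pressure: float  # 0.0 (coast) to 1.0 (full emergency stop)

def act(p_dog_ahead: float, speed_mps: float) -> Optional[BrakeCommand]:
    """Prediction feeds judgment; judgment triggers execution.

    p_dog_ahead -- the perception model's predicted probability of a dog ahead
    speed_mps   -- current speed, used to judge how hard to brake
    """
    # Judgment: hitting a dog costs vastly more than a needless stop, so even
    # a modest probability should trigger braking (the 0.2 threshold is illustrative).
    if p_dog_ahead > 0.2:
        pressure = min(1.0, 0.3 + speed_mps / 30)  # brake harder at higher speeds
        return BrakeCommand(pressure=pressure)     # handed off to the automation layer
    return None  # no intervention needed

print(act(p_dog_ahead=0.85, speed_mps=15.0))  # BrakeCommand(pressure=0.8)
```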

System Feedback:

Once the automated response has occurred, the system uses additional sensors to assess the impact of its actions on its desired goals. That information creates a feedback loop that drives tweaks to the prediction, judgment, and execution algorithms. This kind of feedback loop is the foundation of reinforcement learning.
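Here is a toy version of that closed loop, using simple hill climbing as a crude stand-in for full reinforcement learning. The simulated environment and its reward numbers are invented for illustration.

```python
import random

random.seed(0)

def trip_reward(threshold: float) -> float:
    """Simulated feedback: thresholds set too low cause jerky phantom stops,
    too high risk near misses. In this toy world the sweet spot is near 0.3."""
    return -(threshold - 0.3) ** 2 + random.gauss(0, 0.01)

# The "policy" here is just the braking threshold from the previous sketch.
threshold, step = 0.8, 0.05
for trip in range(200):
    # Probe a small change in each direction and keep whichever scores better:
    # sense -> act -> measure outcome -> adjust, trip after trip.
    up, down = trip_reward(threshold + step), trip_reward(threshold - step)
    threshold += step if up > down else -step
    threshold = min(max(threshold, 0.0), 1.0)

print(f"tuned threshold: {threshold:.2f}")  # drifts toward the sweet spot near 0.3
```

Each pass around the loop turns measured outcomes into a slightly better policy, which is exactly the spiral the next section describes.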

A Spiral of Intelligence

The complexities surrounding prediction and judgment mean that semiautonomous systems will remain far more common than fully autonomous systems over the next decade. As we overcome these obstacles, we will set in motion an explosion of planetary intelligence.

Intelligence and automation feed upon one another. Automation improves the value of products and services, increasing end-user engagement and the flow of data into these systems. This sparks more intelligence, better automation, and more iterations of positive feedback: an upward spiral of intelligence and automation that grows into brilliant autonomous systems organized around meeting human goals.

An upward spiral of autonomous intelligence.

Augmenting Judgment

In the decades ahead, humans will face difficult decisions about how much judgment is best handed over to machines. In Machines of Loving Grace, John Markoff relates the history of the artificial intelligence field by highlighting two camps: one focused on replacing humans with artificial intelligence (AI), the other on enhancing humans with intelligence augmentation (IA).

Machines enable us to pool our judgment as collective intelligence far more effectively than we did with arrowheads, pyramids, and dictionaries. The coming wave of products and services will take this collective human intelligence to new heights by coupling it with automation. The resulting artificially intelligent autonomous systems will replace a great deal of human work in the process. 

At the same time, limits like non-ergodic samples, the difficulty of predicting unknown futures, and the inherent subjectivity of human value judgments will preserve important roles for humans for quite some time. Machines will help on this front too, assisting humans in making better judgments by overcoming the inherent biases and other cognitive errors left over from the evolutionary path of the human brain. In other words, machines will augment our judgment. The challenge that lies ahead is how to do that without suppressing free will, subjective experience, and that which is most precious about humanity.

8 thoughts on “Autonomous Systems Unleashed”

  1. The subject of your new article is overwhelming; it makes one wonder where this all ends (or when humanity ends!). When I think of how exponentially things have changed just in the past 10 years, it is mind-boggling to try to project one’s imagination forward. I suppose it’s considered progress, but you recognize my concern very succinctly in your last sentence!

    1. I know what you mean, Bill. The new systems headed our way add to a sense of disorientation that’s been already brewing for a while. They could be used for great good or create more messes. That’s the hard part.

  2. That’s a brilliant overview of where all this is heading. Indeed, prediction is what we do with our brains, and in creating autonomous systems to take over from us, their ability to predict what comes next is central to their function. That requires judgement (or decision-making) at some point, and this is where things become interesting, because judgement or decision-making is, as you point out, subject to values, and those values arise out of context. Context is always situational or culture-driven. Search, for instance, has a separate semantic search index for each language for that very reason. Arguably, search scales easily because the core technology is the same. Autonomous systems may not enjoy the same cost-saving feature.

    My point is that in order for autonomous systems to fully replace the HITL, we’d have to get better at creating general artificial intelligence, and on that score we’re some way behind. Nevertheless, I think you’re 100% right in saying that we shall see partial replacement of HITL soon enough (train driving being a case in point). Interesting times we’re in!

    1. Thanks, David. Your point about context is important. Context is always changing and that has a huge impact on the values that we express in any given interaction with one of these systems. Context has many variables and cultural context is one of the most powerful – and interesting.

      And yes, we’ve got a very long way to go on many fronts before general artificial intelligence gets us to the Cylon level. I think we will see fully autonomous systems emerging, and probably more quickly than we might guess, but it will be in heavily constrained contexts (there’s that word again), where the consequences of failure are easily contained and where the importance of human subjectivity is minimal.

      Thanks for the thought-provoking comment.

  3. I’m curious where you got the lead photo. One of them is tagged Wikimedia Commons but not the others…
