What will it be like when machines make and execute decisions without any human intervention? Why would we make such systems and what are their implications for the future of human judgment and free will?
Keeping Humans in the Loop
Hundreds, if not thousands, of science fiction stories tell us it’s a bad idea to build automated systems without “human-in-the-loop” (HITL) processes for keeping them in check. In real life, the need for human intervention before executing an automated process is most obvious when that process has serious, irreversible consequences, like killing a person with a drone.
HITL takes many forms. In high-stakes situations like drone strikes, humans make the difficult judgment call before the weapon’s deadly automation kicks in. In lower-stakes situations, humans spot-check systems and mop up their limitations during and after the automation runs. Seasoned customers at Amazon Go automated retail outlets make no contact with employees, simply grabbing what they want from the shelf and walking out. But some shoppers get confused, so Amazon keeps a few employees on hand to help them.
Cutting the Cord
Given the benefits of keeping humans in the loop, why do systems designers keep pushing so hard to design humans out of the loop? The simple answer is that, over the long term, “autonomous systems” shrink labor costs. They also increase profits by boosting revenues through quality improvements and greater operational scale. In fact, our most fundamental assumptions about the design of autonomous systems are driven by a kind of “code within the code” that is deeply shaped by our economics.
Though they are the focus of much of my writing, this article sets aside the ethical and societal implications of “cutting the cord” on these autonomous systems, focusing instead on how they are likely to unfold over time.
Predict, Judge, Act
Today’s machine learning systems are essentially automated systems for making predictions. This is the point made by Ajay Agrawal, Joshua Gans, and Avi Goldfarb in their excellent book, Prediction Machines.
One of the book’s useful contributions is its insight that decision making consists of prediction and judgment. Deciding to bring an umbrella to work means knowing the weather forecast as well as forming some judgment over the benefit of having an umbrella if it does rain and the cost of carrying it around all day if it doesn’t. The authors sketch out a useful model for understanding the relationship between prediction and judgment while pointing out that the increased supply of predictions brought about by machine learning will also increase the demand for judgment.
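The umbrella example can be sketched as a tiny expected-value calculation. The probabilities and payoffs below are invented for illustration; the point is only that prediction (the rain forecast) and judgment (the values placed on outcomes) are separate inputs to the decision:

```python
# Hypothetical sketch: a decision as prediction plus judgment.
# All numbers are illustrative assumptions, not from the book.

def decide_umbrella(p_rain: float,
                    benefit_if_rain: float = 10.0,  # value of staying dry
                    cost_of_carrying: float = 2.0   # hassle if it stays dry
                    ) -> bool:
    """Bring the umbrella if its expected value beats leaving it home."""
    expected_value = p_rain * benefit_if_rain - (1 - p_rain) * cost_of_carrying
    return expected_value > 0

print(decide_umbrella(0.7))  # likely rain -> True
print(decide_umbrella(0.1))  # unlikely rain -> False
```

The prediction supplies `p_rain`; the judgment lives entirely in the benefit and cost parameters, which differ from person to person.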
Judgment is Tricky
When we make predictions, we draw on a sample of data, hoping it tells us something about larger patterns beyond the sample. It’s a bit like scooping a ladle of chicken soup to assess the characteristics of the whole soup within the pot. The challenge is that such samples often lack the “representativeness” we need: if the pot hasn’t been stirred, the ladle tells us little about the soup as a whole.
Predictions and judgments are both vulnerable to this sampling problem: a judgment formed from limited experience can mislead just as easily as a prediction drawn from a skewed sample.
The other tricky thing about judgment is that it relies on our underlying values and value is always in the eye of the beholder. Even if I know with absolute certainty it will rain today, I may still choose to leave my umbrella home because I’m from Seattle and that’s just not how we roll here.
Humans excel at embedding intelligence into artifacts — and one of our favorite things to embed is judgment. Legal codes are an obvious example of the way we embed judgments into stone tablets, books, and databases.
We also embed judgment into artifacts through the much more common practice of design. The stone tools shaped by early hominids millions of years ago are essentially a record of human judgments about how to cut, scrape, and pierce, embedded in designs. Vehicles aren’t yet fully autonomous because we still need humans to intervene for many judgment calls, but Tesla, Uber, and other companies are working to embed more and more of those judgment calls into the vehicles themselves.
While vehicle safety goals are fairly objective, the goals of an autonomous system can also be quite subjective. Good judgment often requires understanding what the end-user actually values. Researchers at Google’s sister company, DeepMind, are streamlining the capture of end-user preferences through a variety of techniques and are using those preferences to train machines to drive an old Atari racing game in playfully human ways.
Workings of an Autonomous System
While there are undoubtedly limitations, we are on track to build automated judgment for a variety of applications. What follows is a framework for seeing how automated judgment serves as a bridge to fully autonomous systems:
The above graphic represents systems that are both intelligent and automated. The top left and middle portion reflects the intelligence of the system. This machine intelligence connects to automation in the lower right, the impact of which is then evaluated as feedback for retraining and further optimizing the system.
The intelligence of an autonomous system grows through a two-step process. Sensors draw in data samples to train algorithmic models to do things like recognizing a human face. After the training, the model can then be used for large-scale inference work, like recognizing millions of people on Facebook. You can learn more about this distinction between training and inference here: A Handy Way to Think About Machine Learning.
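The two-step split can be made concrete with a toy example. The “model” here is a deliberately simple per-label average, nothing like a real face recognizer; it just shows the shape of the pattern, where training is expensive and occasional while inference is cheap and repeated at scale:

```python
# Minimal sketch of the training/inference split. The toy model and
# labels are invented for illustration.

def train(samples):
    """Training: fit a model (here, a per-label mean) from labeled data."""
    sums, counts = {}, {}
    for x, label in samples:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def infer(model, x):
    """Inference: apply the trained model to new inputs, cheaply, at scale."""
    return min(model, key=lambda label: abs(model[label] - x))

model = train([(1.0, "cat"), (1.2, "cat"), (4.0, "dog"), (4.4, "dog")])
print(infer(model, 1.1))  # -> cat
print(infer(model, 4.5))  # -> dog
```

Once `train` has run, `infer` can be called millions of times against the same model, which is what makes the inference stage the workhorse of deployed systems.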
As we’ve seen already, in order to make actual decisions, autonomous systems must be able to train models for both prediction and judgment and then use those models to do inference work and actually generate decisions at scale.
Once the system has a decision about what to do, it then executes additional logic on how to do it. Amazon algorithms predict products you may like and then judge how best to respond to your actions on the site. Additional code is then called to automatically render the appropriate information on screen. In the same way, a self-driving car predicts a dog ahead on the road, judges that it must stop immediately and sends instructions to the brakes to automatically make that happen.
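The predict, judge, act sequence from the driving example might be sketched as three composed functions. The sensor model, cost values, and action names below are all hypothetical stand-ins, not how any real vehicle is implemented:

```python
# Hedged sketch of the predict -> judge -> act pipeline described above.

def predict(sensor_reading: float) -> float:
    """Prediction: estimate the probability an obstacle is ahead.
    (A stand-in for a trained model.)"""
    return min(1.0, max(0.0, sensor_reading))

def judge(p_obstacle: float,
          cost_of_braking: float = 1.0,
          cost_of_collision: float = 100.0) -> str:
    """Judgment: weigh the predicted outcome against what we value."""
    if p_obstacle * cost_of_collision > cost_of_braking:
        return "brake"
    return "continue"

def act(decision: str) -> str:
    """Execution: translate the decision into hardware instructions."""
    return f"sending '{decision}' command to actuators"

print(act(judge(predict(0.8))))    # -> sending 'brake' command to actuators
print(act(judge(predict(0.001))))  # -> sending 'continue' command to actuators
```

Note how the three stages stay separable: the prediction model can be retrained, or the judgment costs retuned, without touching the execution code.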
Once the automated response has occurred, the system uses additional sensors to assess the impact of its actions on its desired goals. The information then creates a feedback loop that initiates tweaks in the prediction, judgment and execution algorithms. This kind of feedback loop is known as reinforcement learning.
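That feedback loop can be illustrated with a simple epsilon-greedy scheme, a toy stand-in for full reinforcement learning. The two actions and their rewards are invented; the point is the loop itself, where each action’s measured impact flows back into the estimates that drive the next decision:

```python
import random

# Toy feedback loop: act, sense the impact, update the estimates.
# Actions and rewards are hypothetical (assumption: layout_b is better).

random.seed(0)

actions = ["layout_a", "layout_b"]
value = {a: 0.0 for a in actions}   # estimated reward per action
count = {a: 0 for a in actions}

def true_reward(action: str) -> float:
    """Environment feedback, deterministic for this sketch."""
    return 1.0 if action == "layout_b" else 0.2

def update(action: str, reward: float) -> None:
    """Fold the observed impact back into the action-value estimate."""
    count[action] += 1
    value[action] += (reward - value[action]) / count[action]

for a in actions:                    # try each action once to seed estimates
    update(a, true_reward(a))

for step in range(200):
    if random.random() < 0.1:        # explore occasionally
        a = random.choice(actions)
    else:                            # otherwise exploit the best estimate
        a = max(actions, key=lambda x: value[x])
    update(a, true_reward(a))        # act, then learn from the feedback

print(max(actions, key=lambda x: value[x]))  # -> layout_b
```

Even this crude loop converges on the better action; production systems replace the table of estimates with trained models, but the act-measure-update cycle is the same.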
A Spiral of Intelligence
The complexities surrounding prediction and judgment mean that semiautonomous systems will remain far more common than fully autonomous systems over the next decade. As we overcome these obstacles, we will set in motion an explosion of planetary intelligence.
Intelligence and automation feed upon one another. Automation improves the value of products and services, increasing end-user engagement and the flow of data into these systems. This sparks more intelligence, better automation, and more iterations of positive feedback: an upward spiral of intelligence and automation that grows into brilliant autonomous systems organized around meeting human goals.
In the decades ahead, humans will face difficult decisions about how much judgment is best handed over to machines. In Machines of Loving Grace, John Markoff relates the history of the artificial intelligence field by highlighting two camps: one focused on replacing humans with artificial intelligence (AI) and the other enhancing humans with intelligence augmentation (IA).
Machines enable us to pool our judgment as collective intelligence far more effectively than we did with arrowheads, pyramids, and dictionaries. The coming wave of products and services will take this collective human intelligence to new heights by coupling it with automation. The resulting artificially intelligent autonomous systems will replace a great deal of human work in the process.
At the same time, limits like