
Machined Prejudice: Three Sources of Technology Bias

Given our long history with tools, the idea that we inject bias into technology isn’t exactly new. What is new are the subtle forms of technology bias that machine learning introduces.

Technology Bias: the embedding of a particular tendency, trend, inclination, feeling, or opinion into technological systems

1) Designers and Technology Bias

The most obvious way we bias our tools is through the assumptions we bring to the design process. Sometimes those assumptions are deliberate, but more often than not, they are unconscious.

My son recently completed an internship at an organization called Disability Rights Washington, which has helped open my eyes to design decisions that make our buildings, streets, and sidewalks unusable for a portion of society. The clincher for me was a video by Paul Tshuma showing how our emergency preparedness plans typically ignore people with disabilities. In it, Paul is stranded on the top floor of a building as an emergency alarm blares and everyone else has long since made it safely out of the building via stairs leading to an emergency exit.

All design decisions are judgments, and as such, convey some form of bias. We often just don’t notice it—especially if a particular tool has been with us for a while.

2) End Users and Technology Bias

Digital media enables us to interact with information in new ways, and that interaction creates a feedback loop that introduces a new form of technology bias. Because end users now participate in shaping products and services, they introduce bias through their engagement.

The way we like and share stuff on social media streams, for example, doesn’t just shape our own experience. It also influences what happens to our friends on these networks. Your bias for cute kittens, clever memes and birthday messages increases my likelihood of seeing that stuff in my stream. Our interactions with one another cause the network to become our bias.

The strange thing about end user bias is that radically different types of bias can coexist simultaneously on the same platform. Clusters of hatred and bigotry can thrive right beside communities of love and inspiration. Our engagement fragments us into echo chambers of shared bias.
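As a rough sketch of that dynamic (not any real platform’s ranking algorithm; the users, topics, and weighting below are invented purely for illustration), an engagement-weighted feed might work something like this:

```python
from collections import defaultdict

# Hypothetical engagement log: (user, topic they liked) -- invented data
likes = [("alice", "kittens"), ("alice", "memes"), ("bob", "kittens")]

# Who follows whom (also invented): carol sees content from alice and bob
friends = {"carol": ["alice", "bob"]}

def rank_feed(viewer, candidate_topics):
    """Order candidate topics by how often the viewer's friends liked them."""
    friend_likes = defaultdict(int)
    for user, topic in likes:
        if user in friends.get(viewer, []):
            friend_likes[topic] += 1
    # Higher friend engagement ranks higher: the friends' bias becomes the feed.
    return sorted(candidate_topics, key=lambda t: friend_likes[t], reverse=True)

print(rank_feed("carol", ["news", "kittens", "memes"]))
# -> ['kittens', 'memes', 'news']: kittens rise because carol's friends liked them
```

The ranker never asks what carol actually wants; it simply amplifies whatever her friends already engaged with, which is exactly how their bias becomes her feed.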


3) Algorithm Trainers and Technology Bias

Machine learning algorithms learn by interacting with humans, often via services like Google Search and Facebook. What that means is that we humans are training the artificial intelligence that fuels our intelligent devices. What that also means is that human trainers play an important role in determining the values — and biases — of these systems.

Let’s say, for example, that I want to build a machine learning algorithm that recognizes images. Let’s also say that to train the system, I select a group of trainers who are all men. The human intelligence passed into this system is thereby skewed towards a male perspective. It might be so biased as to annoy, and possibly even insult, women who tried to use it.
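To see how that skew arises mechanically, here is a minimal, purely illustrative sketch (the labeling task, styles, and trainer pools are all invented, not drawn from the example above) of how the make-up of a training pool shapes the labels a model ever learns from:

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical task: trainers label whether a photo shows "professional attire".
# Each trainer reliably recognizes only the styles they are personally familiar with.
def label(photo_style, familiar_styles):
    if photo_style in familiar_styles:
        return "professional"                 # recognized, labeled correctly
    # Unfamiliar styles are frequently (and wrongly) rejected.
    return "not professional" if random.random() < 0.7 else "professional"

# A narrow trainer pool: everyone shares the same frame of reference.
narrow_pool = [{"suit", "blazer"}] * 10
# A mixed pool: different trainers recognize different styles.
mixed_pool = [{"suit", "blazer"}, {"sari", "blazer"}, {"hijab", "suit"}] * 4

photos = ["suit", "sari", "hijab", "blazer"] * 25   # 100 photos, evenly mixed

def label_counts(pool):
    return Counter(label(p, random.choice(pool)) for p in photos)

print("narrow pool:", label_counts(narrow_pool))
print("mixed pool: ", label_counts(mixed_pool))
# The narrow pool systematically rejects the styles it doesn't recognize;
# any model trained on those labels inherits exactly that skew.
```

Whatever skew the trainer pool bakes into the labels, the trained model faithfully learns it back, at which point the bias looks like an objective judgment.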

Training bias is a serious concern. In machine learning, selecting human trainers is a core part of the design process. As more of these systems come online, learning and growing through their interactions with us, we must guard against imbuing them with harmful human bias.

Machine learning now grades essays on standardized tests and automates resume screening in HR systems. It’s important that these systems aren’t trained in ways that favor certain groups of people over others. Police departments across the U.S. are already assessing people’s likelihood of committing future crimes based on a system with a demonstrated racial bias. In a society already wrestling with institutional racism, a thoughtless rush to artificial intelligence could replicate bias on an unimaginable scale, one that could unravel the very fabric of society.


Designing Containers of Culture

I believe that artificial intelligence is becoming a container for collective human intelligence. This question of technology bias shows how artificial intelligence also acts as a container for human culture. We’re still in the early days of defining what this container will look like, but Riot Games provides an intriguing hint in its use of machine learning to change the toxic culture of online gaming.

As intelligent systems control more and more aspects of society and our economy, it’s essential that we learn to identify and isolate harmful bias as a proactive part of the design process for any intelligent system. Doing so won’t just weaken the grip of frail human egos. It will strengthen the better angels of our culture.


Image: Silent house party by Imokurnotok, CC BY-SA 3.0

4 thoughts on “Machined Prejudice: Three Sources of Technology Bias”

  1. jp

    One person likes a boom-box. Others like iPhones. Where is the bias? Is it “bias”, that other people are hearing one person’s music? How?
    How do you propose to “correct” the opinions of humans without introducing your own bias?

    1. Gideon Rosenblatt

      The bias in that example is really just a designer responding to customer preferences. It’s kind of a silly example, actually. I’m using the boombox, which is something I’d not seen someone using on the street in a very long time, to show how most designers no longer build that product with that use in mind – though they might have a couple decades ago. It’s a design assumption, or bias, that isn’t really consciously stated anywhere. It just happens.

      As for correcting people’s opinions, that’s not what this is about. It’s about building systems with a conscious eye towards the fact that bias is going to creep in through a variety of channels, and about building options into our services that give people a way of removing those filter biases, if they so choose.

  2. Great article! It also got me thinking about how morality will play into the ethics of designing/training machine intelligence. Whose morality is the “correct” morality? Will artificial intelligence be used for propaganda or to covertly quell an uprising?

    1. Gideon Rosenblatt

      Thanks, Freya. Good questions. The sad thing is that we’re already seeing it used for propaganda in Cambridge Analytica’s work for the Trump campaign (and others). Quelling dissent is even more scary.
