When Toilets Reveal Class Bias in Machine Learning

Reading Time: 1 minute

Machine learning isn’t just for customizing the Facebook newsfeed or helping us pick the right hair dryer on Amazon. It can also be used to help make the world a better place. One of the earlier applications of machine learning toward that goal is data analysis.

But what happens when the data itself is biased? That’s exactly what researchers at Alto Analytics found when they undertook a machine-learning image recognition analysis of toilets to help assess unsafe sanitation conditions around the globe.

The analysis revealed that 30.3% of the world’s toilets cannot be recognized by artificial intelligence, based on the images available on gapminder.org. The Alto Analytics team found a clear correlation between recognition results and family income: the algorithm struggled to identify toilets in images from low-income households.
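As a rough illustration of the kind of analysis described above (not Alto Analytics’ actual pipeline), the sketch below groups hypothetical classifier outputs by household income bracket and computes the share of images in which a toilet was recognized at all. The data, column names, and bracket labels are invented for the example.

```python
import pandas as pd

# Hypothetical results: one row per Dollar Street-style image, with the
# household's income bracket and whether the classifier returned a
# "toilet" label for the image. All values here are made up.
results = pd.DataFrame({
    "income_bracket": ["low", "low", "low", "middle", "middle", "high", "high", "high"],
    "toilet_recognized": [False, False, True, True, False, True, True, True],
})

# Recognition rate per income bracket: the share of images in which the
# model identified a toilet.
recognition_rate = (
    results.groupby("income_bracket")["toilet_recognized"].mean().sort_values()
)
print(recognition_rate)
# A bias like the one reported would show up here as a markedly lower
# recognition rate for the "low" bracket than for the others.
```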

We used AI to study the world’s toilets. This is what we found

Thank you to Michel Bauwens for bringing this to my attention on Twitter.
