Machine learning systems as tools of oppression

An introduction to the harms that ML systems cause and 11 concrete ways to build fairer ML systems

Photo of Black Lives Matter protesters in Washington, D.C. — two signs read “Black Lives Matter” and “White Silence is Violence”
Photo by Koshu Kunii on Unsplash

How machine learning systems cause harm

Broader context

How collecting labels for machine learning systems causes harm

Why these harms are happening

When ML systems are used

How ML systems are designed

Whose perspectives are centered when ML systems are designed

Lack of transparency around when ML systems are used

Lack of legal protection for ML system participants

Call to action

Imbalanced scale image — ML system developer & labeling task requester weigh more than ML system participant & labeling agent
There are huge power imbalances in machine learning system development: ML system developers have more power than ML system participants, and labeling task requesters have more power than labeling agents. [Image source: http://www.clker.com/clipart-scales-uneven.html]

How to build fairer machine learning systems

#1

#2

#3

#4

#5

#6

#7

#8

#9

#10

#11

Conclusion

References

