Awareness of bias in datasets and downstream applications is commonplace, but what about bias in the value systems developed to govern those data and algorithms? I recently listened to an interview with MIT Associate Professor Catherine D’Ignazio, who explores this issue through intersectional feminism. The podcast was great, but the book Data Feminism, co-authored with Lauren F. Klein, was incredible.
I’m delighted and ashamed at how many moments of insight I had while reading it. A standout for me was an early discussion of ethical frameworks and how current conceptions fall short of addressing the root cause of bias. Look at any framework; I bet you’ll see words like ‘fairness’, ‘accountability’, and ‘transparency’. All great ideals, but they generally locate the source of bias in individual people and design choices, ignoring the structures that allow injustice to exist in the first place. The authors pair six common values with adjacent concepts that acknowledge the systems of oppression through which bias manifests. Adjacent to ethics is justice, to fairness is equity, to transparency is reflexivity, and to accountability is co-liberation, to name a few.
This last one was particularly thought-provoking, as I read the book while Australia voted down the Voice referendum. Data Feminism is one of the best books I’ve read. If you’re sceptical about how feminism can be applied to data science or AI, I encourage you to read at least the first chapter.