📚 More than a Glitch

I’ve finished reading More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech by Meredith Broussard.

Quote from the book ‘More than a Glitch’ by Meredith Broussard: “Tech is racist and sexist and ableist because the world is so. Computers just reflect the existing reality and suggest that things will stay the same-they predict the status quo. By adopting a more critical view of technology, and by being choosier about the tech we allow into our lives and our society, we can employ technology to stop reproducing the world as it is, and get us closer to a world that is truly more just.”

The book is a polemic: it explores technology, algorithms, machine learning and artificial intelligence, and asserts that they are always biased. It has really got me thinking and seeing things differently. It reminded me of reading Ibram X. Kendi’s How To Be An Antiracist, which also gave me a completely new way of seeing the world. I have recently been reviewing documents on ethical AI, and I am now looking at them in a very different light.

This coded language shows up everywhere once you are attuned to it. Consider this IBM AI governance report, which reads: “Extensive evidence has shown that AI can embed human and societal biases and deploy them at scale. Many experts are now saying that unwanted bias might be the major barrier that prevents AI from reaching its full potential. . . . So how do we ensure that automated decisions are less biased than human decision-making?” This is problematic because it assumes that AI’s “full potential” is even possible, which has no evidence aside from the imagination of a small, homogenous group of people who have been consistently wrong about predicting the future and who have not sufficiently factored in structural inequality. The question of “How do we ensure that automated decisions are less biased?” reinforces this problematic assumption, implicitly asserting for the reader that computational decisions are less biased. This is not true, and IBM and other firms should stop writing things that include this assumption. The technochauvinist binary thinking of either computers or humans is the problem: neither alone will deliver us.

I loved the insights on how inputs into machine learning models come from a world that is inherently biased, which will always lead to tools that are biased in some way. Many examples are given of how systems trained on this data enforce and amplify existing patterns. For example, where exams couldn’t take place in 2020 due to the COVID-19 pandemic, algorithms were used to determine pupil grades instead. The book gives examples from the US, but I distinctly remember the fiasco here in the UK. Assigning grades to students based on historic data from their school, or on any other demographic information, may seem ‘fair’ to the people designing the algorithm. But to any one person being judged by the system it is deeply unfair.
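To make the unfairness concrete, here’s a toy sketch of that kind of grading logic. It’s my own simplification rather than the actual Ofqual formula or anything from the book, but it shows the core problem: each pupil is simply slotted into their school’s historic grade distribution, so a strong pupil at a school that has never produced top grades can never receive one.

```python
# A toy sketch (my own simplification, not the real Ofqual or any US formula) of
# the kind of logic being criticised: each pupil is slotted into their school's
# historic grade distribution, so individual ability never enters the model.

def assign_grades(pupils, historic_grades):
    """Give each pupil the grade at their rank in the school's past results.

    pupils: list of (name, teacher_rank) tuples, where rank 1 is the strongest pupil.
    historic_grades: grades the school achieved in previous years, best first.
    """
    ranked = sorted(pupils, key=lambda p: p[1])
    return {
        name: historic_grades[min(rank - 1, len(historic_grades) - 1)]
        for name, rank in ranked
    }

# A pupil predicted an A at a school whose previous cohorts never achieved one:
pupils = [("Asha", 1), ("Ben", 2), ("Cleo", 3)]   # hypothetical names and ranks
history = ["B", "C", "C"]                         # the school's past results contain no A
print(assign_grades(pupils, history))
# {'Asha': 'B', 'Ben': 'C', 'Cleo': 'C'} -- Asha is capped by her school's history
```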

The book explores the use of machine learning systems by the police. Historic data shows where arrests have been made and who was arrested, but not necessarily where crimes were actually committed or who committed them. This bias creates a feedback loop where predictive technology asserts that future crimes will be committed in similar areas, by similar people.

The thing is, everyone is a criminal to some extent because everyone has done things that violate the law. For example, white and Black people use drugs and deal drugs at equal rates. Bias determines who gets constructed as a criminal; not everyone gets caught, not everyone gets punished, and some people get punished more than others. The unequal application of justice can be seen in crime maps. Look at a crime map for any major city, and it’s pretty much the same as the map of where Black people live. Again, not because Black people commit more crimes, but because the things we call “crime maps” are actually arrest maps, and Black people are arrested for crimes at a higher rate. When you train algorithms on crime data, you are training the algorithm to over-police certain zip codes or geographic areas, because that is what has happened in real life in the past. You are training the algorithms to be biased.
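Here’s a toy simulation of that feedback loop, using made-up numbers rather than anything from the book: two areas with identical underlying offence rates, but patrols are sent wherever the arrest records already point, and arrests can only be recorded where the patrols are.

```python
import random

# A toy simulation with assumed numbers (nothing here comes from the book):
# two areas with identical true offence rates, but area A happens to start with
# more recorded arrests. Patrols go wherever the arrest history points, and
# arrests can only be recorded where patrols are, so the "prediction" confirms itself.

random.seed(1)
true_offence_rate = {"A": 0.3, "B": 0.3}   # identical underlying behaviour
recorded_arrests = {"A": 10, "B": 5}       # arrest records, not actual crime

for day in range(50):
    # send the day's patrols to whichever area the data calls the bigger "hotspot"
    hotspot = max(recorded_arrests, key=recorded_arrests.get)
    for patrol in range(20):
        if random.random() < true_offence_rate[hotspot]:
            recorded_arrests[hotspot] += 1

print(recorded_arrests)
# Area A ends up with hundreds of recorded arrests while B stays where it started,
# purely because that is where the historic data sent the patrols.
```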

There’s a fantastic example where someone has put together a ‘White Collar Crime Risk Zones’ tool, which identifies ‘hotspots’ in a similar way to other systems. For New York City you can see that the major ‘risk areas’ are clustered around the financial districts.

Screenshot from the website ‘White Collar Crime Risk Zones’. A map of New York City is shown, zoomed in to show parts of Manhattan, Brooklyn and Queens. Yellow and red ‘clouds’ are on the map to show zones of white-collar crime risk, clustered around the Financial District and Midtown Manhattan. Brooklyn and Queens have almost no yellow or red blobs.

Broussard asserts that people from the data science and technology world often assume they can use their tools to generate insights in whatever field they apply them to, without considering the long history, the large body of existing work, and the experts who have been working in that field for many years before them:

One of the big misconceptions of data science is that it provides insights. It doesn’t always. Sometimes the insights are merely things that the data scientists didn’t know, but people in other disciplines already knew. There’s an important distinction between what is unknown to the world versus what is simply unknown to you. Data scientists in general need to do more qualitative research, and talk to experts in relevant fields, before designing and implementing quantitative systems.

I loved the insight that designing tools for inclusion actually makes them better for everybody. It got me thinking about the minimal effort I have been putting into adding alt-text to images on this website. The tools I use for blogging don’t make it easy, but I know there will be a way to do it. I’ll try harder. It’s not really acceptable for images to be inaccessible to vision-impaired readers in 2024.
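One small thing I can do is audit the posts I’ve already written. Here’s a rough sketch of the kind of check I have in mind; the posts directory and the Markdown image pattern are assumptions about my own setup, not a feature of any particular blogging tool.

```python
import re
from pathlib import Path

# Rough sketch: flag Markdown images whose alt text is empty.
# The "posts" directory and *.md layout are assumptions about my own setup.
IMAGE = re.compile(r"!\[(?P<alt>.*?)\]\((?P<src>[^)]+)\)")

for post in Path("posts").glob("*.md"):
    for match in IMAGE.finditer(post.read_text(encoding="utf-8")):
        if not match.group("alt").strip():
            print(f"{post.name}: no alt text for {match.group('src')}")
```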

Useful innovations like the typewriter, text messaging, audiobooks, remote controls, wide rubber grips on kitchen tools, voice assistants, and closed captioning all stem from designs for disability. “When we design for disability first, we often stumble upon solutions that are not only inclusive, but also are often better than when we design for the norm,” Roy said. “This excites me, because this means that the energy it takes to accommodate someone with a disability can be leveraged, molded, and played with as a force for creativity and innovation. This moves us from the mindset of trying to change the hearts and the deficiency mindset of tolerance to becoming an alchemist, the type of magician that this world so desperately needs to solve some of its greatest problems.”

Although I found the writing style quite dry, I’m very glad I picked this book up. I’m going to be thinking about its insights long after I’ve put it down.
