The Question of Algorithms

Algorithms are the step-by-step procedures that help us throughout our everyday lives. We use them when we decide what meals to cook from the ingredients we have on hand, or which resources to pick depending on how relevant they are to us. The problem with algorithms in this day and age is that they lack transparency. A lot of what algorithms “do” is hidden behind unknown rules and digital curtains that companies do not make us privy to. Louise Matsakis writes that “…even the people who build them aren’t always capable of describing how they work.”1 This statement captures the situation: big companies can build these algorithms, but even they cannot always explain how the algorithms will behave. She later argues that most people do not really care about the code the big companies use; they just want the algorithms to be fair. Algorithms can be “sexist” or “racist” because they have no social context and absorb the biases embedded in what users find relevant. This ties into the question of fairness: how transparent can we make these algorithms, and will people be willing to accept what they find?

It is for this reason that scholars like Cathy O’Neil and Matsakis caution us to become more careful consumers of digital media. O’Neil warns us not to put blind faith in algorithms,2 while Matsakis wants us to question the algorithms we use in our daily lives. I found O’Neil’s description of an algorithm particularly interesting. Simply put, you have a historical data-set and a definition of success. The historical data-set is what you bring together, like the ingredients in a dish: you go over which ingredients you want to use and which you don’t. Your definition of success might be that your family likes the meal you prepared. The process can then be repeated, with each success added back into the historical data-set. This kind of algorithm works for you and around you. The algorithms that companies build and have AIs run pull their historical data-sets from the entire culture, so many biases of the past can be impressed upon the present.
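To make O’Neil’s description concrete, here is a minimal sketch of that loop in Python. Everything in it is a hypothetical illustration of the cooking analogy rather than anything from her talk: the meals, the counts, and the family_likes stand-in are all made up. What matters is the structure, which is exactly the one she names: a historical data-set, a definition of success, and successes fed back into the data.

```python
from collections import Counter

# Historical data-set: past meals and how many times each one
# counted as a success (the family liked it). Numbers are made up.
history = Counter({"pasta": 3, "stir fry": 2, "fish tacos": 0})

def family_likes(meal):
    """Definition of success. In real life this would come from
    serving the meal and asking; here it is a hypothetical stand-in."""
    return meal in {"pasta", "stir fry"}

def choose_and_update(history):
    """Pick the meal with the best track record, then feed the
    outcome back into the historical data-set."""
    meal = max(history, key=history.get)  # optimize for past success
    if family_likes(meal):
        history[meal] += 1                # success reinforces itself
    return meal

for _ in range(3):
    print(choose_and_update(history))     # prints "pasta" every time
```

Notice that fish tacos can never recover: the loop only ever reinforces what already looked successful. That is a toy version of the problem O’Neil, and Noble below, describe at cultural scale, where whatever biases sit in the historical data-set get reproduced by every future choice.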

Safiya Noble brings up this exact idea in the context of racism and sexism, particularly as they affect Black women. These historical data-sets have access to all the data the culture can provide, so the algorithms built on them end up carrying the biases of the past into the present. Her personal anecdote3 at the beginning of her book illustrates this effectively through the sexualization of women and how it shaped her Google search results. She simply wanted to find things to do with her nieces but was met with pornographic results. She goes on to discuss how the Google search algorithm has no social context: it can show something biased, racist, or sexist without any concern for, or programmed awareness of, the consequences. So, in a vein similar to O’Neil and Matsakis, Noble cautions us about how big companies use algorithms and about their biases and fairness, but she also carries a sense of activism. Noble describes movements to get companies to change how these algorithms function and to investigate their biases. We need to take O’Neil’s, Noble’s, and Matsakis’s accounts into consideration and begin questioning these algorithms ourselves.


  1. Matsakis, Louise. “What Does a Fair Algorithm Actually Look Like?” Wired. February 14, 2019. https://www.wired.com/story/what-does-a-fair-algorithm-look-like/?GuidesLearnMore.
  2. O’Neil, Cathy. “The Era of Blind Faith in Big Data Must End.” TED. https://www.ted.com/talks/cathy_o_neil_the_era_of_blind_faith_in_big_data_must_end?language=en.
  3. Specifically her first chapter called “A Society Searching.” Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press, 2018.
