The problem of gaps, biases and barriers

In discussions about equity on the web and in technology I often come across two terms: gaps and biases. Sometimes they are accompanied by barriers. It is said that, to achieve equity, we need to close gaps, correct biases and remove barriers.

[Image: A Google Ngram search for “Gaps and Biases”. The graph starts at 0% in 1940 and rises to 0.0000001% by 2019.]

“Gaps and biases” has had quite a career since the 1940s, at least according to the Google Ngram corpus.

Like any metaphor, gaps, biases and barriers highlight some aspects and tone down others. In this case they emphasize problems that can seemingly be clearly defined and alleviated, while aspects that are political, messy and systemic become less noticeable.

Gaps, biases and barriers all imply that there is an underlying, desirable world that is free of them. Each metaphor relates to this underlying world in a different way:

  • Gaps imply that there is a distance between the current state and the desired state that is to be overcome. If the gap is closed, the correct state is reached.
  • Biases imply a failure of rationality. People can be biased in their thinking; data sets can be an improper, skewed representation of the world. The correct state is one that is not skewed but represents and thinks about the world as it is.
  • Barriers imply that there is an obstruction of a (preexisting) path. The correct state is one where the barrier is removed; the path could be used again.

All these metaphors assume not only an undesirable state and a preferred one, but also that the preferred state is preexisting. The undesirable states are accidentally or willfully wrong versions of the correct state:

  • The status quo does not have gaps to the many possible preferable states; it has one gap, to the correct state;
  • Bias stands in contrast to the rational, correct way of thinking;
  • The barrier blocks the preexisting (correct) path.

If the metaphors suggested a preferred state, they would imply that this state needs to be chosen, making the process also a political question. The metaphors of gaps, biases and barriers, however, imply an underlying ideal state; the problems are thus not a political failure but a failure to recognize the ideal, correct state and to act upon that knowledge.

Framing problems as being due to gaps, biases and barriers is compatible with the libertarian ideas in cyberculture 1 : It directs attention away from political action and messy negotiations towards a problem that is defined by reference to a platonic correct state 2 and which can be rationally defined and solved.

This rationalist idea of ethics is analyzed by Abeba Birhane in her article “Algorithmic injustice: a relational ethics approach” 3, in which she focuses on ethics in data science and algorithmic decision making. She points out that “Any data scientist working to automate issues of a social nature, in effect, is engaged in making moral and ethical decisions” 3 and that “In a supposedly objective worldview, bias, injustice, and discrimination are (mis)conceived as being able to be permanently corrected. The common phrase “bias in, bias out” captures this deeply ingrained reductive thinking.” 3

These are the ethics matching the metaphors of gaps, biases and barriers. However, we could move away from such metaphors. As Birhane suggests, “harm”, “injustice” and “oppression” might be better. While she focuses on algorithmic justice and AI, I think these suggestions apply well to many other fields in which gaps, biases and barriers are used to frame problems.


  • 2021-04-05 Update: One sentence was a mess, so I rewrote it.

  1. The classic text here is “The Californian Ideology” 4. The turn away from formal politics towards harmonious cybernetic systems is described in Turner’s From Counterculture to Cyberculture 5.

  2. Phil Agre described the practices of formalization in AI research and their relation to platonism in “The Practical Logic of Computer Work” 6 (also available on Agre’s website). Similar practices are also discussed in chapter two of Bowker’s “Memory Practices in the Sciences” 7.

  3. Birhane, Abeba. “Algorithmic Injustice: A Relational Ethics Approach.” Patterns 2, no. 2 (February 12, 2021): 100205.

  4. Barbrook, Richard, and Andy Cameron. “The Californian Ideology.” Science as Culture 6, no. 1 (1996): 44–72.

  5. Turner, Fred. From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism. 1st edition. Chicago: University of Chicago Press, 2006. 

  6. Agre, Philip E. “The Practical Logic of Computer Work.” In Computationalism, edited by Matthias Scheutz, 129–42. 

  7. Bowker, Geoffrey C. Memory Practices in the Sciences. Inside Technology. Cambridge, Mass.: MIT Press, 2008.