The Ethics of Algorithms & Data Discrimination: A Critical Reading Response
Rebecca Solnit’s River of Shadows: Eadweard Muybridge and the Technological Wild West has been on my mind as I read more about the discourse surrounding current technology and disruptive innovation. Solnit’s 2003 book is a portrait of the photographer and early innovator Eadweard Muybridge, but it is even more a critical reflection on the development of technology in the American West in the 1870s. Solnit discusses how Muybridge’s studies in motion had a seismic impact on how people perceived and acted in the world at that time. According to Solnit, Muybridge’s zoopraxiscope and motion images, alongside other technologies such as the railroad, telephone, phonograph, and telegraph, had the effect of annihilating people’s concept of time and space.[1]
The current mythologies surrounding algorithms remind me of Solnit’s description of this historic paradigm shift at the turn of the century;[2] in both cases there is a gap of understanding and much anxiety. In a similar fashion to the lack of knowledge which bred fear of photographic techniques ‘stealing one’s soul’ or of the railroad being a threat to one’s life,[3] there is a mysticism attributed to algorithms, which are upheld and feared as powerful actors in a rapidly changing landscape wherein our relationships to labour, intellectual property, individual agency, privacy, and control are being disrupted and redefined.
With stealthy, creeping futurism, algorithms have permeated our everyday experiences; operations, decisions, and choices previously left to humans are increasingly delegated to algorithms, which advise, if not decide, how data should be interpreted and what actions should be taken.[4]
In this way, algorithms have tangible effects on our perception of reality and our relations to each other, affecting decision-making processes through the determination of ‘actionable insights’ and subtly embedded contextual manipulations in marketing and social media. Common concerns with respect to the dangerous effects of incomprehensible black box technology and discriminatory outcomes are expressed by the authors of week nine’s readings; however, there are competing perspectives with respect to how an algorithm may be defined and how to assign ethical responsibility in practice, i.e., can there be an unethical algorithm, or is it always the operator or designer who holds the actionable responsibility? Computer scientists argue that algorithms are equations of inputs and outputs (complex in design, yet simple with respect to function); social scientists acknowledge algorithms but largely choose to discuss their ethical, philosophical, and social impacts on a non-technical level; and the greater public conceives of algorithms as inaccessible ‘black boxes’, if they are made aware of them in the first place.[5] These waters become even murkier when the implications of machine learning are considered: in such scenarios the decision-making is adaptive and uninterpretable, and the parameters of value judgement are obscured.[6]
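The contrast between the computer scientists’ ‘inputs and outputs’ view and the machine learning case can be made concrete with a minimal sketch of my own (all names and thresholds here are illustrative assumptions, not drawn from any of the readings):

```python
# A minimal, hypothetical sketch of the 'inputs and outputs' view of an
# algorithm. All names and thresholds here are illustrative assumptions.

def rule_based_decision(income: float, debt: float) -> bool:
    """Transparent rule: the value judgement (the 0.4 debt-to-income
    threshold) is explicit, inspectable, and auditable."""
    return debt / income < 0.4

def learned_decision(features: list[float], weights: list[float]) -> bool:
    """Stand-in for a learned model: the same kind of input/output mapping,
    but the value judgement is diffused across weights fitted from data,
    so the parameters of judgement are obscured."""
    score = sum(f * w for f, w in zip(features, weights))
    return score > 0.0

print(rule_based_decision(income=50_000, debt=15_000))  # True: ratio is 0.3
print(learned_decision([0.3, 1.0], [-2.0, 0.5]))        # False: but why?
```

Both functions are ‘simple with respect to function’, yet only the first wears its value judgement on its sleeve; in the second, the judgement lives in the weights.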
The authors considered here focus on two central questions in their discussion of algorithms: 1. How should algorithms be defined, and what methods can be used to regulate and audit their effects or ethics? (Sandvig et al., 2016; Mittelstadt et al., 2016; Gangadharan et al., 2014) and 2. How do algorithms, artificial intelligence, and machine learning enable discrimination? (Noble, 2018; Eubanks, 2014; boyd, 2014). Most of the authors agree that there is a gap in understanding between the design and implementation of algorithms and their ethical implications (Mittelstadt et al., 2016; Sandvig et al., 2014; Sandvig et al., 2016); others emphasize that algorithms are not inherently biased but rather are made so by their human creators and by skewed datasets (Noble, 2018; boyd, 2014; Eubanks, 2014).
Safiya Noble is one of the most outspoken on the topics of responsibility in algorithmic design and operation, arguing in her book Algorithms of Oppression: How Search Engines Reinforce Racism that there is a causal link between the intentions of those creating algorithms to enact their personal biases towards racism, sexism, and discriminatory ethics, and the algorithms themselves.[7] Noble’s assumption is contested by Brent Daniel Mittelstadt et al., who express in their paper The Ethics of Algorithms: Mapping the Debate that determining whether a particular problematic decision is merely a ‘one-off bug’ or evidence of a systemic failure or bias may be impossible (or at least highly difficult) with learning algorithms that are poorly interpretable and difficult to predict.[8]
Mittelstadt et al. provide a guide for reflecting on the current debate surrounding the ethics of algorithms. Their paper is structured around a prescriptive map of the ethics of algorithms, dividing the terrain into six connected concerns[9] and backing up their model with a literature review and examples of each in an actionable context. Their map responds to algorithms that have two functions: 1. To turn data into evidence for a given outcome (conclusion), and 2. To trigger or motivate an action that may not be ethically neutral. Mittelstadt et al. recognize that the impact of algorithms is often latent, hard to determine, and highly contextual. It is an admirable effort to define the ethical landscape of algorithmic use; however, I think the key to understanding algorithms will be found in understanding their function: most attempts to explain them theoretically, without an analysis of their mathematical structure, risk being inaccurate and falling short in practice. This is not often possible with black box technologies, however, and quantitative and qualitative analysis of the outputs may need to suffice.
This view is reinforced by Christian Sandvig et al.’s When the Algorithm Itself Is a Racist: Diagnosing Ethical Harm in the Basic Components of Software. The authors, situated in disciplines from computer science to the humanities, call attention to the divergent understandings of algorithms. They argue that despite algorithms’ complexity, secrecy, technical character, or generality, one can use them as subjects of ethical and normative analysis; however, they believe it is important for all scholars to pay mind to the technical details of the innards of particular computer code when studying algorithms’ consequences.[10] Using three hypothetical algorithms, Sandvig et al. illustrate the various computational processes that could be used to consider or ignore race in decision-making. They then compare the actionable outputs of each algorithm to the ACM’s code of ethics,[11] weighing in on whether it: 1. Acted honestly (virtue ethics), 2. Avoided harm (consequentialist), and 3. Followed a predetermined rule (deontological). I find this article interesting in its attribution of human characteristics to algorithms. To Sandvig et al., algorithms have agency, ethics, and even virtue, defined in this case with respect to the rigidity and consistency of response. I wonder whether it makes something easier to understand if we humanize it, and whether that is why we are driven to do the same with algorithms.
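The spirit of Sandvig et al.’s three hypotheticals can be gestured at with a short sketch of my own (the field names and values are hypothetical, not the authors’ actual examples), showing how race can enter a computation directly, by proxy, or not at all:

```python
# My own illustrative reconstruction (not Sandvig et al.'s actual code) of
# how race can enter a computation: directly, by proxy, or not at all.
# All field names and values below are hypothetical.

REDLINED_AREAS = {"60653", "48205"}  # hypothetical postal codes

def decide_direct(applicant: dict) -> bool:
    # Race is an explicit input: the discrimination is visible in the code.
    return applicant["credit_score"] > 650 and applicant["race"] == "white"

def decide_by_proxy(applicant: dict) -> bool:
    # Race never appears, but postal code can correlate strongly with it;
    # the harm is real yet invisible to a reading of the source alone.
    return (applicant["credit_score"] > 650
            and applicant["postal_code"] not in REDLINED_AREAS)

def decide_blind(applicant: dict) -> bool:
    # Neither race nor an obvious proxy is consulted.
    return applicant["credit_score"] > 650
```

The proxy case is the one that makes reading ‘the innards of particular computer code’ necessary but not sufficient: the discriminatory effect lives in the relationship between the code and the world, not in the code alone.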
Reflecting on the direct impact of algorithms on discriminatory practices in public policy, Seeta Peña Gangadharan’s Data and Discrimination: Collected Essays[12] brings together work from eighteen researchers from various backgrounds. Three themes are discussed: 1. Discovering and Responding to Harms, 2. Participation, Presence, and Politics, and 3. Fairness, Equity, and Impact. Many of the authors in this collection note that there is a gap between the public’s awareness of algorithms and the influence algorithms have on their lives.[13] Virginia Eubanks’ Big Data and Human Rights reminds us that data discrimination is not new in the United States, but rather has been used against minorities for decades.[14] Eubanks speaks to legacy system prejudice and the ‘social specs’ that underlie our decision systems and data-sifting algorithms, and offers a number of participatory design solutions including co-design, transparency, access, and control of information.[15] Similarly, Sandvig et al.’s An Algorithm Audit provides a tangible regulatory solution to the problem of obscured algorithms through auditing methodologies, the logic of which is sketched below.[16]
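A minimal sketch of that paired-testing (‘correspondence audit’) logic, which Sandvig et al. adapt from housing and employment studies. Here `query_platform` is a hypothetical stand-in for the black-box system under audit; nothing below is the authors’ actual implementation:

```python
# A minimal sketch of the paired-testing ('correspondence audit') logic.
# `query_platform` is a hypothetical stand-in for the black-box system
# under audit; this is my illustration, not the authors' implementation.

def audit(query_platform, base_profile: dict, attribute: str,
          values: tuple[str, str], trials: int = 1000) -> dict[str, float]:
    """Submit matched pairs of fictitious profiles that differ only in one
    protected attribute and return each value's positive-outcome rate."""
    hits = {v: 0 for v in values}
    for i in range(trials):
        profile = dict(base_profile, trial=i)  # identical except `attribute`
        for v in values:
            if query_platform(dict(profile, **{attribute: v})):
                hits[v] += 1
    return {v: n / trials for v, n in hits.items()}

# A large gap between the two rates is evidence of disparate treatment,
# even while the platform's internals remain a black box.
```

The appeal of the method is precisely that it works from the outside: it measures outcomes rather than intentions, sidestepping the interpretability problem raised by Mittelstadt et al.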
boyd et al.’s Networked Nature of Algorithmic Discrimination builds on Eubanks’ call to action and discusses another concern in algorithmic activity: that of networked associations. The authors point to the phenomenon of predictive analytics and recommendation systems, which can lead to impacts on employment practices, access to resources, and prejudicial profiling. boyd et al.’s central concern is not simply that networked associations are being used, but that their use goes largely unaddressed in current policy and legislation.[17] boyd et al.’s paper resonated with me with respect to contemporary hiring practices[18] and my pursuit of a summer internship or co-op. Several employer panels have been organized as part of our co-op course this term, and I am surprised by the number of companies that use LinkedIn and algorithmic tools for screening. How do we navigate a landscape wherein our social media profiles and word choice are being scanned by technology to determine ‘fit’ prior to any human interaction? Eubanks asks, ‘How can you prove a discrimination case against a computer? Can due process be violated if an automated decision-making system is simply running code?’[19]
I am not convinced that such a case can be proven; it would be incredibly difficult to stand trial against the verity of a computer, especially one whose mechanisms are obscured, and this will only become more difficult in the years to come.
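boyd et al.’s worry about networked associations can be made concrete with one last sketch (again my own illustration, not the authors’ method): an attribute a user never disclosed can be inferred from the declared attributes of their connections, so individual control over one’s own data offers little protection.

```python
# A sketch (my illustration, not boyd et al.'s method) of networked
# inference: an attribute a user never disclosed is guessed from the
# declared attributes of their connections. All data below is hypothetical.

from collections import Counter

def infer_attribute(user: str, graph: dict[str, list[str]],
                    known: dict[str, str]) -> str | None:
    """Majority vote over a user's neighbours; controlling one's own
    disclosures offers no protection against this inference."""
    votes = Counter(known[n] for n in graph.get(user, []) if n in known)
    return votes.most_common(1)[0][0] if votes else None

graph = {"alice": ["bob", "carol", "dan"]}
known = {"bob": "union_member", "carol": "union_member", "dan": "other"}
print(infer_attribute("alice", graph, known))  # 'union_member'
```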
References
boyd, danah; Levy, Karen; and Marwick, Alice. (2014). ‘Networked Nature of Algorithmic Discrimination’ in Seeta Peña Gangadharan, ed. Data and Discrimination: Collected Essays. Washington, D.C.: Open Technology Institute, New America Foundation. [e-published] https://www.newamerica.org/oti/data-discrimination/, pp. 53-57.
Eubanks, Virginia. (2014). ‘Big Data and Human Rights’ in Seeta Peña Gangadharan, ed. Data and Discrimination: Collected Essays. Washington, D.C.: Open Technology Institute, New America Foundation. [e-published] https://www.newamerica.org/oti/data-discrimination/, pp. 48-52.
Gangadharan, Seeta Peña, ed. (2014). Data and Discrimination: Collected Essays. Washington, D.C.: Open Technology Institute, New America Foundation. [e-published] https://www.newamerica.org/oti/data-discrimination/, pp. 1-64.
Mittelstadt, Brent Daniel; Allo, Patrick; Taddeo, Mariarosaria; Wachter, Sandra; and Floridi, Luciano. (2016). ‘The Ethics of Algorithms: Mapping the Debate’. Big Data & Society (July-December) 3(2): pp. 1-21.
Noble, Safiya. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.
Sandvig, Christian; Hamilton, Kevin; Karahalios, Karrie; and Langbort, Cedric. (2014). ‘An Algorithm Audit’ in Seeta Peña Gangadharan, ed. Data and Discrimination: Collected Essays. Washington, D.C.: Open Technology Institute, New America Foundation. [e-published] https://www.newamerica.org/oti/data-discrimination/, pp. 6-10.
Sandvig, Christian; Hamilton, Kevin; Karahalios, Karrie; and Langbort, Cedric. (2016). ‘When the Algorithm Itself Is a Racist: Diagnosing Ethical Harm in the Basic Components of Software’. International Journal of Communication 10. [e-article] http://go.galegroup.com.myaccess.library.utoronto.ca/ps/i.do?p=AONE&u=utorontomain&id=GALE|A478974414&v=2.1&it=r&sid=summon.
Solnit, Rebecca. (2003). River of Shadows: Eadweard Muybridge and the Technological Wild West. New York: Viking.
Endnotes
[1] Rebecca Solnit. (2003). River of shadows: Eadweard Muybridge and the technological wild west. New York: Viking, p. 40.
[2] Christian Sandvig et al., referencing R. Dyer’s 1997 book White, speak of the similarities between scholars needing to train up on algorithms and cinema historians learning about the photographic process. See Christian Sandvig, Kevin Hamilton, Karrie Karahalios, and Cedric Langbort. (2016). When the Algorithm Itself Is a Racist: Diagnosing Ethical Harm in the Basic Components of Software. International Journal of Communication 10: [e-article] http://go.galegroup.com.myaccess.library.utoronto.ca/ps/i.do?p=AONE&u=utorontomain&id=GALE|A478974414&v=2.1&it=r&sid=summon, p. 4986.
[3] At the railroad’s official opening, Kemble returned to ride with her mother, who was ‘frightened to death’ of ‘a situation which appeared to her to threaten with instant annihilation herself and all her traveling companions’. From Rebecca Solnit. (2003). River of Shadows: Eadweard Muybridge and the Technological Wild West. New York: Viking, p. 37.
[4] Brent Daniel Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter & Luciano Floridi. (2016). The Ethics of Algorithms: Mapping the Debate. Big Data & Society (July-December) 3(2): p. 1.
[5] There is an increased awareness of algorithmic effects and manipulations in the general public, although wilful ignorance is arguably still the rule. This change has largely come about in the past year, with the revelations about Facebook and Cambridge Analytica (amongst other corporate offenders, such as Netflix, Uber, et al.).
[6] Brent Daniel Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter & Luciano Floridi. (2016). The Ethics of Algorithms: Mapping the Debate. Big Data & Society (July-December) 3(2): p. 6; Christian Sandvig, Kevin Hamilton, Karrie Karahalios, and Cedric Langbort. (2016). When the Algorithm Itself Is a Racist: Diagnosing Ethical Harm in the Basic Components of Software. International Journal of Communication 10: [e-article] http://go.galegroup.com.myaccess.library.utoronto.ca/ps/i.do?p=AONE&u=utorontomain&id=GALE|A478974414&v=2.1&it=r&sid=summon, p. 4979.
[7] Noble gives the example of the audit of the Panda algorithm that removed the pornographic search results for ‘black girls’ after her 2012 Bitch article outing Google for executing racist search algorithms. She was able to compare the change to the racialized and sexualized results for ‘Latinas’ and ‘Asians’, which remained unchanged.
[8] Brent Daniel Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter & Luciano Floridi. (2016). The Ethics of Algorithms: Mapping the Debate. Big Data & Society (July-December) 3 (2): p. 2.
[9] Mittelstadt et al.’s six concerns with respect to algorithmic activities are: 1. Inconclusive evidence, 2. Inscrutable evidence, 3. Misguided evidence, 4. Unfair outcomes, 5. Transformative effects, and 6. Traceability.
[10] Christian Sandvig, Kevin Hamilton, Karrie Karahalios and Cedric Langbort. (2016). When the Algorithm Itself Is a Racist: Diagnosing Ethical Harm in the Basic Components of Software. International Journal of Communication 10: p. 4972. [e-article] http://go.galegroup.com.myaccess.library.utoronto.ca/ps/i.do?p=AONE&u=utorontomain&id=GALE|A478974414&v=2.1&it=r&sid=summon.
[11] Association for Computing Machinery. (n.d.). Retrieved January 7, 2019, from https://www.acm.org/.
[12] From the Open Technology Institute’s research convening at the International Communication Association’s 2014 annual meeting.
[13] Christian Sandvig et al., in particular, mention a Facebook study in which the majority of the social network’s users did not know algorithms were used to filter news stories; this awareness has shifted today, largely due to the many revelations which occurred this year (e.g., Facebook scandals, Cambridge Analytica, widespread data breaches, et al.). Christian Sandvig et al., ‘An Algorithm Audit’ in Gangadharan, S. P. (Ed.). (2014). Data and Discrimination: Collected Essays. Washington, DC: Open Technology Institute, New America Foundation. Retrieved from https://www.newamerica.org/oti/policy-papers/data-and-discrimination/.
[14] Eubanks discusses the rise of digital technology use in public services in the 1970s and the design of the National Crime Information Center (NCIC) and New York’s Welfare Management System (WMS) in response. Virginia Eubanks, ‘Big Data and Human Rights’ in Gangadharan, S. P. (Ed.). (2014). Data and Discrimination: Collected Essays. Washington, DC: Open Technology Institute, New America Foundation. Retrieved from https://www.newamerica.org/oti/policy-papers/data-and-discrimination/, p. 50.
[15] ‘These strategies might result in more effective decision-making, better matching of resources to needs, more timely feedback, and improved relationships between recipients, workers and governments.’ Virginia Eubanks, ‘Big Data and Human Rights’ in Gangadharan, S. P. (Ed.). (2014). Data and Discrimination: Collected Essays. Washington, DC: Open Technology Institute, New America Foundation. Retrieved from https://www.newamerica.org/oti/policy-papers/data-and-discrimination/, p. 50.
[16] This regulatory proposal calls for the application of ‘audit studies’, a well-used and respected social scientific method for detecting racial discrimination in employment and housing by means of fictitious correspondence. The authors speculate that such audits could hold an authority similar to standardized consumer reports. Furthermore, they suggest a third-party review body should be formed to test and audit internet platforms and issue ‘lemon warnings’ about platforms that manipulate users or deceive the public. Christian Sandvig, Kevin Hamilton, Karrie Karahalios, and Cedric Langbort, ‘An Algorithm Audit’ in Gangadharan, S. P. (Ed.). (2014). Data and Discrimination: Collected Essays. Washington, DC: Open Technology Institute, New America Foundation. Retrieved from https://www.newamerica.org/oti/policy-papers/data-and-discrimination/.
[17] ‘Discussion around privacy and fairness…typically rest on the notion of individual control over information, but our networks reveal a great deal…algorithms that identify our networks, or predict our behaviours on them, pose new possibilities for discrimination and inequitable treatment…and networks are at the base of how contemporary data analytics work.’ danah boyd, Karen Levy, and Alice Marwick, ‘Networked Nature of Algorithmic Discrimination’ in Seeta Peña Gangadharan, ed. (2014). Data and Discrimination: Collected Essays. Washington, D.C.: Open Technology Institute, New America Foundation. [e-published] https://www.newamerica.org/oti/data-discrimination/, p. 55.
[18] ‘Didn’t use the right buzzword in your list of skills (on LinkedIn)? Applicants must learn to game the opaque algorithms before a person actually takes a glance at them,’ write danah boyd, Karen Levy, and Alice Marwick, ‘Networked Nature of Algorithmic Discrimination’ in Seeta Peña Gangadharan, ed. (2014). Data and Discrimination: Collected Essays. Washington, D.C.: Open Technology Institute, New America Foundation. [e-published] https://www.newamerica.org/oti/data-discrimination/, p. 55.
[19] Virginia Eubanks ‘Big Data and Human Rights’ in Seeta Peña Gangadharan, ed. (2014). Data and Discrimination: Collected Essays. Washington, D.C.: Open Technology Institute, New America Foundation. [e-published] https://www.newamerica.org/oti/data-discrimination/, p. 51.