Algorithms do not have agency. People write algorithms. Do not blame algorithms.
Do not blame the drones. The drones are not important. The human operators are important. The human operators of algorithms are not lion tamers.
Do not blame the drones for making you depressed. Do not blame the algorithms for blowing up towns. Oceania has not always been at war with Eastasia (Ellis, n.d.).
I am neither a data scientist nor do I have any background in computer science. I am an educator and researcher with a keen interest in how we engage with student data, in issues pertaining to privacy and, increasingly, in the potential and harm of algorithmic decision-making in higher education. Amidst claims and promises that algorithmic decision-making will help higher education make better and faster decisions about student applications, personalise student learning and assessment, and increase student retention and success, I cannot help but feel uncomfortable about the design, accountability and unintended consequences of algorithms in higher education. Reading “The black box society” by Frank Pasquale (2015), work by John Danaher (2014) and Evgeny Morozov (2013), the provocation piece by Barocas, Hood and Ziewitz (2013), and the unfolding unease about the scope and impact of artificial intelligence and machine learning only strengthens my discomfort.
I also would like to acknowledge the many conversations with a colleague of mine who would often be bemused (if not irritated) by my concerns about algorithms: their reach, their design and how they shape our world. If he were to edit this blog, he would immediately have cautioned against the implication that algorithms have agency and act independently of human design and intention. Whenever I shared an article about how algorithms shape our lives, he would always state: No, it is not the algorithm; it is the person (or team) who designed the algorithm. He would emphasise that the algorithm is but the tool in the hand of the designer… If algorithms do discriminate, it is because they were designed to discriminate. If algorithms are biased, it is because the biases of their designers and developers were captured in them.
So why does the fact that algorithms increasingly shape my world make me feel so uncomfortable and uneasy?
Was I just as uncomfortable when humans used to make decisions about what I am worth, what my creditworthiness is, what my health risk profile is? Were humans less biased than algorithms? Or to what extent does the bias inherent in algorithms impact me more than when the same bias was present in my dealings with a human behind a desk? Am I just as uncomfortable with algorithms when I rely on them to find the best route to a destination or the cheapest airfare, or when I enjoy reading a book found through a recommender system?
I trust algorithms when searching for a cheap airfare or the best route to avoid a traffic jam, so why am I so uncomfortable with algorithms in higher education? Can I trust them?
Oops. I did it again. Is it not strange that it is somehow easier to grasp and deal with the impact of algorithms on our lives by ascribing human qualities to them?
Povey and Ransom (2000) found that students using technology in mathematics anthropomorphise technology as a mechanism for voicing their discomfort with the seeming power struggle between technology and humanity. These authors point out that talking about technology in human terms is “an aspect of a wider contemporary discourse on the relationship between technology and society” (p. 60). They refer to the public uproar when a computer beat world chess champion Garry Kasparov:
[The outcome of the match] threw some commentators into a tizzy. After all, they reasoned, how long can it be before [a computer], say, launches all the missiles in the world or gets its own late-night talk show? (People Magazine, 26 May 1997, p. 127, as quoted by Povey and Ransom, 2000, p. 60)
Does this not sound like the way we talk about algorithms?
Fox (2010) also explored the phenomenon of anthropomorphism, stating that it “is rampant in all cultures and religions” (par. 2) and “ingrained in human nature” (par. 8), from the way we worship gods that resemble ourselves to how we make sense of a “largely meaningless world” (par. 16). He proposes that we “are more likely to anthropomorphise when faced with unpredictable situations or entities” (par. 17). By anthropomorphising non-human actors and technology, we claim a “sense of control” (par. 18), belonging and connection. As a result we build relationships with our computers and talk about the stock market as climbing higher or flirting with higher values… (par. 29).
Specific to our anthropomorphising of technology, Buchanan-Oliver, Cruz and Schroeder (2010) claim that the way we speak about technology originates from “deeply-seated anxieties toward the mythic figure of the cyborg, which has been read as monstrous, Frankensteinian icon inviting both sympathy and revulsion” (p. 636). As such, talking about algorithms as having agency may resemble “technology as prosthesis” (Buchanan-Oliver et al., 2010, p. 642) or an extension of humanity (with all of our hopes, goodwill, fears, bias and hunger for power). The way we talk about algorithms may furthermore herald increasingly porous boundaries between human and posthuman, where we “mutate at the rate of cockroaches, but we are cockroaches whose memories are in computers, who pilot planes and drive cars that we have conceived, although our bodies are not conceived at these speeds” (Stelarc and Orlan, quoted by Buchanan-Oliver et al., 2010, p. 644). So technology and algorithms are no longer external tools to be used by us, but have become “an intrinsic part of human subjectivity” (Buchanan-Oliver et al., 2010, p. 645).
And then there is the ever-increasing threat that machines will outsmart us… (see Dockrill’s post of 11 December, 2015 – “Scientists have developed an algorithm that learns as fast as humans. That’s the tipping point right there, folks.”). Or see the collection of essays edited by John Brockman (2015), “What to think about machines that think.”
While it is tempting to think in terms of a binary, contrasting situations where decisions are made exclusively by humans with situations where decisions are made exclusively by algorithms, the reality is much more nuanced, as John Danaher points out in a post of June 15 (2015) – see the diagram below.
Image credit: Danaher (2015, June 15)
What I like about Danaher’s proposal is that it provides a more nuanced understanding not only of the different phases of data collection and use, but also of the way the framework relates these phases to different combinations of human and algorithmic interaction. Different combinations are possible where, for example, algorithms collect the information, but the analysis is done by humans only, or shared with algorithms, or done by algorithms with human supervision, or done by algorithms without human supervision. (For a full discussion of the different combinations and their implications, see Danaher, 2015.)
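For those who, like me, grasp combinatorics more easily in code than in diagrams, here is a minimal sketch in Python. The stage names and involvement labels are my own layperson’s paraphrase of Danaher’s diagram, not his exact taxonomy:

```python
from dataclasses import dataclass
from enum import Enum
from itertools import product

class Involvement(Enum):
    """Who performs a given stage (my paraphrase, not Danaher's exact labels)."""
    HUMANS_ONLY = "humans only"
    SHARED = "humans and algorithms together"
    ALGORITHMS_SUPERVISED = "algorithms, with human supervision"
    ALGORITHMS_ONLY = "algorithms, without human supervision"

@dataclass(frozen=True)
class DecisionSystem:
    """One possible configuration of an algorithmic decision-making system."""
    collection: Involvement
    analysis: Involvement
    use: Involvement

# Enumerating every combination shows how quickly the simple
# "humans versus algorithms" binary dissolves: three stages with
# four modes each already yield 4 ** 3 = 64 distinct systems.
systems = [DecisionSystem(c, a, u) for c, a, u in product(Involvement, repeat=3)]
print(len(systems))  # 64
```

Even this toy enumeration makes the point: once mixed modes are allowed at each stage, the binary gives way to dozens of distinct configurations, each raising its own questions about oversight and accountability.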
It is important to note that there is possibly another layer embedded in the above diagram: the algorithms themselves may have been written exclusively by humans, or developed through iterative cycles of artificial intelligence. Embedded and encoded in these processes are human bias and goodwill, and accountability for, and the ethical implications of, this mutually constitutive process resemble a ‘wicked’ problem, described as “a social or cultural problem that is difficult or impossible to solve for as many as four reasons: incomplete or contradictory knowledge, the number of people and opinions involved, the large economic burden, and the interconnected nature of these problems with other problems.”
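To make that distinction concrete, here is a deliberately simple contrast in Python between a rule written directly by a human and a rule induced from past decisions (toy data; scikit-learn assumed; the feature and threshold are invented for illustration):

```python
from sklearn.linear_model import LogisticRegression

def human_rule(gpa):
    # The designer's judgement is explicit and open to inspection and challenge.
    return gpa >= 2.5

# The learned alternative: the rule is fitted to records of past decisions,
# so whatever bias shaped those decisions is silently re-encoded in the model.
past_gpas = [[1.9], [2.2], [2.8], [3.4], [3.9]]   # toy historical applicants
past_outcomes = [0, 0, 1, 1, 1]                   # 1 = admitted, 0 = rejected
learned_rule = LogisticRegression().fit(past_gpas, past_outcomes)

print(human_rule(2.6))                   # True
print(learned_rule.predict([[2.6]])[0])  # likely 1, but the 'why' is buried in weights
```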
The ‘wickedness’ of my attempt in this blog to make sense of my discomfort with algorithmic decision-making is due, firstly, to my lack of the theoretical tools and academic background to fully understand how algorithms work, and secondly, to the difficulty of explaining the intricacies of my discomfort about ‘losing control’…
Having acknowledged my possible lack of understanding, allow me then to voice my discomfort in layperson’s terms. Though I have acknowledged that we should not think in terms of binaries – humans making decisions versus algorithms (created by humans) making decisions – thinking in terms of a binary gives me a handle on this slippery phenomenon.
The definition and scope/scale of the knowledge about me
In times past, when humans made decisions about my creditworthiness, they most probably relied on past documents and records (on file) of my interactions with their institution, and on information I provided on the prescribed application form, with my signature to confirm that I had told the truth. I cannot deny that my race, gender, language and home address played (and still play) a crucial role in their decisions. Depending on who interviewed me (and in those years it was almost certain to have been a white male), my chances of being successful were fairly good. Even today, were I to be interviewed by a person of a different race and home language, the legacy of my whiteness might actually carry the day.
In the context of algorithmic decision-making, I am not sure (actually, I never know) which sources of information, collected in which context and for what purpose, are being used to inform the final decision. As each source of information is combined with another, each source’s boundary of integrity collapses, and the biases and assumptions that informed the collection of data in one context are merged and morphed with other sources of information carrying their own biases and contexts. We are becoming increasingly small and vulnerable nodes in the lattice of information networks where, like Josef K. in Franz Kafka’s The Trial, we are never told what the allegations against us are or what the sources of information were. All we are told is that “Proceedings have been instituted against you…” (Kafka, 1984, p. 9), without ever having access to what they know.
[See the essay by John Danaher on issues regarding fairness in algorithmic decision-making (2015, November 5)].
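A crude sketch of what I mean, with hypothetical field names: two records about the same student, each collected for a stated purpose, are merged into a profile in which that purpose simply disappears.

```python
# Two sources, each with its own context of collection (hypothetical fields).
counselling = {"student_id": 42, "signal": "referred for counselling",
               "collected_for": "student wellbeing support"}
lms = {"student_id": 42, "signal": "3 logins in the last month",
       "collected_for": "course participation tracking"}

# A naive merge keeps the signals but silently discards the purpose each
# item was collected for -- the 'boundary of integrity' collapses.
profile = {"student_id": 42, "signals": [counselling["signal"], lms["signal"]]}
print(profile)  # no trace of 'collected_for' survives the merge
```

The decision downstream sees only the merged signals, never the contexts that gave them meaning.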
The actors: algorithms and data brokers
Recently, Waddell (2015), in an interview with Phillip Rogaway (author of “The moral character of cryptographic work”), noted that “computer scientists and cryptographers occupy some of the ivory tower’s highest floors” (par. 1). The notion of the “data scientist” is emerging as an all-encapsulating title and the “hottest” job title of the 21st century (Chatfield, Shlemoon, Redublado, & Rahman, 2014, p. 2). Data scientists have also been called “gods” (Bloor, 2012), “rock stars” (Sadkowsky, 2014), “high priests” (Dwoskin, 2014; Nielsen, 2014), “engineers of the future” (van der Aalst, 2014) and “game changers” (Chatfield et al., 2014, p. 2).
So can I trust them to write algorithms if they do not see their algorithms as deeply political, as flowing from and perpetuating existing power relations, injustices and inequalities, or as creating new ones? To what extent do they accept responsibility for the social impact of their algorithms? To what extent can they be held accountable?
In the past, when decisions were made about my financial future, my application to register or my application for health benefits, those decisions were also made by humans, often with less information at their disposal than the scope of information that algorithms now scrape and use to produce judgements and evaluations. These humans were not less biased, or better informed, than the designers and writers of algorithms, so why am I uncomfortable with algorithms?
One possible reason is that the creators of algorithms are faceless and unaccountable, hidden in a Kafkaesque maze where algorithms feed off one another in perpetual cycles of mutation. Where I could once have petitioned the human who made the decision, or asked to see his or her supervisor, the creators of algorithms are hidden, faceless actors who create and destroy futures with impunity.
Do algorithm writers need a code of conduct, as proposed by John Naughton (6 December, 2015)? Do we need algorithmic angels (Koponen, 2015, April 18)? Is it possible to govern algorithms, and what would need to be in place (Barocas, Hood & Ziewitz, 2013)?
What are our options? What are our students’ options?
What are our options when my whole life becomes a single digit (Pasquale, 2015, October 14)?
In the context of the quantification fetish in higher education, where we count everything, what are the ethical implications when we reduce the complexity of our students’ lives to single digits, to data points on a distribution chart? What are the ethical implications when we then use these numbers to allocate or withhold support, spending our resources on more ‘worthy’ candidates in a game of educational roulette? What does due process look like in a world of automated decisions (Citron and Pasquale, 2014)?
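To see how much a single digit can hide, consider a deliberately crude sketch with invented features and weights. Nothing here resembles a real institutional model, which is precisely the point: we rarely get to see the real one.

```python
def risk_score(logins_per_week, assignments_late, first_in_family):
    """Collapse a student's circumstances into one number (invented weights)."""
    score = (0.5 * (10 - logins_per_week)             # penalise low engagement
             + 1.0 * assignments_late                 # penalise late submissions
             + 2.0 * (1 if first_in_family else 0))   # crude proxy for 'risk'
    return round(min(score, 9))                       # a life, as a single digit

# Two very different students land on the same digit, and the
# reasons behind each digit are invisible to whoever acts on it.
print(risk_score(logins_per_week=0, assignments_late=1, first_in_family=True))   # 8
print(risk_score(logins_per_week=0, assignments_late=3, first_in_family=False))  # 8
```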
What are our options? In a general sense I think the proposal by Morozov (2013) is an excellent start. He proposes four overlapping solutions, namely (1) politicising the issue of the scope and use of algorithms; (2) learning how to sabotage the system by refusing to be tracked; (3) creating “proactive digital services”; and (4) abandoning preconceptions. (See the discussion by Danaher, 2014.)
In the light of the asymmetrical power relationship between higher education and our students, we simply cannot ignore the need to reflect deeply on our harvesting and use of student data. When we see higher education as, first and foremost, a moral endeavour, our commitment to “do no harm” implies that we should be much more transparent about our algorithms and decision-making processes.
Who will hold higher education accountable for the data we harvest and our analyses?
Among other stakeholders, we cannot ignore the role of students. They have the right to know. They have a right to know what our assumptions and understandings of their learning journeys are. They should demand that we do not assume that their digital profiles represent their whole journey. They have a right to due process.
If only they knew.
Image credit: Image compiled from two images.
Bloor, R. (2012, December 12). Are the data scientists future CEOs? [Web log post]. Retrieved from http://insideanalysis.com/2012/12/are-the-data-scientists-future-ceos/
Buchanan-Oliver, M., Cruz, A., & Schroeder, J. E. (2010). Shaping the body and technology: Discursive implications for the strategic communication of technological brands. European Journal of Marketing, 44(5), 635-652.
Chatfield, A. T., Shlemoon, V. N., Redublado, W., & Rahman, F. (2014). Data scientists as game changers in big data environments. Proceedings of the Australasian Conference on Information Systems (ACIS). Retrieved from http://www.researchgate.net/publication/268078811_Data_Scientists_as_Game_Changers_in_Big_Data_Environments
Citron, D. K., & Pasquale, F. A. (2014). The scored society: due process for automated predictions. Washington Law Review, 89, 1-33.
Dwoskin, E. (2014, August 8). Big data’s high-priests of algorithms. The Wall Street Journal. Retrieved from http://tippie.uiowa.edu/management-sciences/wsj2014.pdf
Fox, D. (2010). In our own image. New Scientist, 208(2788), 32-37.
Kafka, F. (1984). The trial. Translated by Willa and Edwin Muir. London, UK: Penguin.
Nielsen, L. (2014). Unicorns among us: understanding the high priests of data science. Wickford, Rhode Island: New Street Communications.
Povey, H., & Ransom, M. (2000). Some undergraduate students’ perceptions of using technology for mathematics: Tales of resistance. International Journal of Computers for Mathematical Learning, 5(1), 47-63.
Sadkowsky, T. (2014, July 2). Data scientists: The new rock stars of the tech world. [Web log post]. Retrieved from https://www.techopedia.com/2/28526/it-business/it-careers/data-scientists-the-new-rock-stars-of-the-tech-world
van der Aalst, W. M. P. (2014). Data scientist: The engineer of the future. In Enterprise Interoperability VI (pp. 13-26). Springer International Publishing. Retrieved from http://bpmcenter.org/wp-content/uploads/reports/2013/BPM-13-30.pdf