Algorithmic decision-making in higher education: There be dragons there…


There be dragons there

Algorithms do not have agency. People write algorithms. Do not blame algorithms.

Do not blame the drones. The drones are not important. The human operators are important. The human operators of algorithms are not lion tamers.

Do not blame the drones for making you depressed. Do not blame the algorithms for blowing up towns. Oceania has not always been at war with Eastasia (Ellis, n.d.).

I am neither a data scientist nor do I have any background in computer science. I am an educator and researcher with a keen interest in how we engage with student data, in issues pertaining to privacy and, increasingly, in the potential and harm of algorithmic decision-making in higher education. Amidst claims and promises that algorithmic decision-making will help higher education make better and faster decisions about student applications, personalise student learning and assessment, and increase student retention and success, I cannot help but feel uncomfortable about the design, accountability and unintended consequences of algorithms in higher education. Reading “The black box society” by Frank Pasquale (2015), work by John Danaher (2014) and Evgeny Morozov (2013), the provocation piece by Barocas, Hood and Ziewitz (2013), and the unfolding unease about the scope and impact of artificial intelligence and machine learning only strengthens my discomfort.

I would also like to acknowledge the many conversations with a colleague of mine who would often be bemused (if not irritated) by my concerns about algorithms – their reach, their design and how they shape our world. If he were to edit this blog, he would have immediately cautioned against the implication that algorithms have agency and act independently of human design and intention. Whenever I shared an article about how algorithms shape our lives, he would always state: No, it is not the algorithm; it is the person (or team) who designed the algorithm. He would emphasise that the algorithm is but a tool in the hand of its designer… If algorithms discriminate, it is because they were designed to discriminate. If algorithms are biased, it is because the biases of their designers and developers were captured in them.

So, given that algorithms increasingly shape my world, why does this make me feel so uncomfortable and uneasy?

Was I just as uncomfortable when humans used to make decisions about what I am worth, what my creditworthiness is, what my health risk profile is? Were humans less biased than algorithms? Or to what extent does the bias inherent in algorithms affect me more than the same bias did in my dealings with a human behind a desk? Am I just as uncomfortable with algorithms when I rely on them for the best route to a destination or the cheapest airfare, or when I enjoy reading a book found through a recommender system?

I trust algorithms when searching for a cheap airfare or the best route to avoid a traffic jam, so why am I so uncomfortable with algorithms in higher education? Can I trust them?

Oops. I did it again. Is it not strange that it is somehow easier to grasp and deal with the impact of algorithms on our lives by ascribing human qualities to them?

Povey and Ransom (2000) found that students using technology in mathematics anthropomorphise technology as a mechanism to voice their discomfort with the seeming power struggle between technology and humanity. These authors point out that talking about technology in human terms is “an aspect of a wider contemporary discourse on the relationship between technology and society” (p. 60). They refer to the public uproar when a computer beat world chess champion Garry Kasparov:

[The outcome of the match] threw some commentators into a tizzy. After all, they reasoned, how long can it be before [a computer], say, launches all the missiles in the world or gets its own late-night talk show? (People Magazine, 26 May 1997, p. 127, as quoted by Povey and Ransom, 2000, p. 60)

Does this sound similar to the way we talk about algorithms?

Fox (2010) also explored the phenomenon of anthropomorphism, stating that it “is rampant in all cultures and religions” (par. 2) and “ingrained in human nature” (par. 8), from the way we worship gods that resemble ourselves to how we make sense of a “largely meaningless world” (par. 16). He proposes that we “are more likely to anthropomorphise when faced with unpredictable situations or entities” (par. 17). By anthropomorphising non-human actors and technology, we claim a “sense of control” (par. 18), belonging and connection. As a result, we build relationships with our computers and talk about the stock market as climbing higher or flirting with higher values… (par. 29).

Specific to our anthropomorphising of technology, Buchanan-Oliver, Cruz and Schroeder (2010) claim that the way we speak about technology originates from “deeply-seated anxieties toward the mythic figure of the cyborg, which has been read as monstrous, Frankensteinian icon inviting both sympathy and revulsion” (p. 636). As such, talking about algorithms as having agency may resemble “technology as prosthesis” (Buchanan-Oliver et al., 2010, p. 642) or an extension of humanity (with all of our hopes, goodwill, fears, bias and hunger for power). The way we talk about algorithms may furthermore herald increasingly porous boundaries between human and posthuman, where we “mutate at the rate of cockroaches, but we are cockroaches whose memories are in computers, who pilot planes and drive cars that we have conceived, although our bodies are not conceived at these speeds” (Stelarc and Orlan, quoted by Buchanan-Oliver et al., 2010, p. 644). So technology and algorithms are no longer external tools to be used by us, but have become “an intrinsic part of human subjectivity” (Buchanan-Oliver et al., 2010, p. 645).

And then there is the ever-increasing threat that machines will outsmart us… (see Dockrill’s post of 11 December 2015 – “Scientists have developed an algorithm that learns as fast as humans”. That’s the tipping point right there, folks.) Or see the collection of essays edited by John Brockman (2015) – “What to think about machines that think”.

While it is tempting to think in terms of a binary – situations where decisions are made exclusively by humans versus situations where decisions are made exclusively by algorithms – the reality is much more nuanced, as John Danaher points out in a post of June 15 (2015) – see the diagram below.

[Diagram: Danaher’s framework of human and algorithmic involvement in the different phases of data collection, analysis and use]

Image credit: Danaher (2015, June 15)

What I like about Danaher’s proposal is that it provides a more nuanced understanding of not only the different phases of data collection and use, but also the way the framework relates these different phases to different combinations of human and algorithmic involvement. Different combinations are possible where, for example, algorithms collect the information, but the analysis is done by humans alone, shared between humans and algorithms, done by algorithms under human supervision, or done by algorithms without human supervision. (For a full discussion of the different combinations and their implications, see Danaher, 2015.)
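To make this combinatorial space concrete, here is a minimal sketch in Python (the phase and involvement labels are my own paraphrase, not Danaher’s exact terminology) that simply enumerates the possible configurations:

```python
from enum import Enum
from itertools import product

# Labels are my own, loosely inspired by Danaher's (2015) framework;
# they are illustrative, not his exact terminology.
class Phase(Enum):
    COLLECTION = "collection"
    ANALYSIS = "analysis"
    DECISION = "decision/use"

class Involvement(Enum):
    HUMANS_ONLY = "humans only"
    SHARED = "shared by humans and algorithms"
    SUPERVISED = "algorithms, supervised by humans"
    ALGORITHMS_ONLY = "algorithms, unsupervised"

# Enumerate every possible configuration: 4 levels across 3 phases.
for combo in product(Involvement, repeat=len(Phase)):
    print({phase.value: level.value for phase, level in zip(Phase, combo)})
```

Even this toy version yields 64 distinct configurations – one more reason why the simple binary of ‘humans versus algorithms’ misleads.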

Important to note is that there is possibly another layer embedded in the above diagram, recognising the fact that algorithms may have been written exclusively by humans, or developed through iterative cycles of artificial intelligence. Embedded and encoded in these processes are human bias and goodwill – where accountability for, and the ethical implications of, this mutually constitutive process resemble a ‘wicked’ problem, described as “a social or cultural problem that is difficult or impossible to solve for as many as four reasons: incomplete or contradictory knowledge, the number of people and opinions involved, the large economic burden, and the interconnected nature of these problems with other problems.”
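My colleague’s point – that biased algorithms capture the biases of their makers – carries over to algorithms that are learned rather than hand-written. Here is a toy sketch, on entirely synthetic data of my own invention, of how a model trained on historically biased decisions quietly reproduces that bias:

```python
# Entirely synthetic data, invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
test_score = rng.normal(60, 15, n)   # a genuine merit signal
group = rng.integers(0, 2, n)        # a protected attribute (0 or 1)

# Historical admissions: identical scores, but group 1 was held to a
# higher bar by past human gatekeepers.
admitted = (test_score - 10 * group + rng.normal(0, 5, n)) > 55

X = np.column_stack([test_score, group])
model = LogisticRegression().fit(X, admitted)

# The learned weight on the protected attribute comes out clearly
# negative: the historical bias is now encoded in a "neutral" algorithm.
print(model.coef_)
```

Nobody wrote a discriminatory rule here; the model simply learned to imitate the gatekeepers who came before it.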

The ‘wickedness’ of trying to make sense of my discomfort with algorithmic decision-making in this blog is also due to, firstly, my lack of the theoretical tools and academic background to fully understand how algorithms work, and secondly, the difficulty of explaining the intricacies of my discomfort about ‘losing control’…

Having acknowledged my possible lack of understanding, allow me then to voice my discomfort and understanding in layperson’s terms. Though I have acknowledged that we should not think in terms of binaries – humans making decisions versus algorithms (created by humans) making the decisions – thinking in terms of a binary gives me a handle on this slippery phenomenon.

The definition and scope/scale of the knowledge about me

In times past, when humans made decisions about my creditworthiness, they most probably relied on past documents and records (on file) of my interactions with their institution, and on information I provided on the prescribed application form, with my signature to confirm that I had told the truth. I cannot deny that my race, gender, language and home address played (and still play) a crucial role in their decisions. Depending on who interviewed me (and in those years it was almost certain to have been a white male), my chances of being successful were fairly good. Even today, were I to be interviewed by a person of a different race and home language, the legacy of my whiteness might actually carry the day.

In the context of algorithmic decision-making, I am not sure (actually, I never know) which sources of information, collected in which context and for what purpose, are being used to inform the final decision. As each source of information is combined with another, each source’s boundary of integrity collapses, and the biases and assumptions that informed the collection of data in one context are collapsed and morphed with other sources of information carrying their own biases and contexts. We are becoming increasingly small and vulnerable nodes in the lattice of information networks where, like the character K. in Franz Kafka’s The Trial, we are never told what the allegations against us are or what the sources of information were. All we are told is that “Proceedings have been instituted against you…” (Kafka, 1984, p. 9), without ever having access to what they know.

[See the essay by John Danaher on issues regarding fairness in algorithmic decision-making (2015, November 5)].
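To illustrate this collapsing of contexts in the most mundane terms possible, here is a hypothetical sketch (every name, field and threshold below is invented) of how two record sets, each collected for its own innocuous purpose, can be joined into a single judgement about a student:

```python
# All names, fields and thresholds below are invented for illustration.
import pandas as pd

wellness = pd.DataFrame({              # collected by counselling services
    "student_id": [1, 2, 3],
    "sessions_attended": [0, 4, 1],
})
library = pd.DataFrame({               # collected for stock management
    "student_id": [1, 2, 3],
    "late_returns": [5, 0, 2],
})

# One merge, and data gathered under two different sets of assumptions
# now feed the same judgement, stripped of their original contexts.
profile = wellness.merge(library, on="student_id")
profile["at_risk"] = (profile["late_returns"] > 3) | (profile["sessions_attended"] > 2)
print(profile)
```

Neither dataset was collected in order to flag anyone, yet the merged profile does exactly that – and the student is none the wiser.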

The actor, algorithms and data brokers

Recently, Waddell (2015), in an interview with Phillip Rogaway (author of “The moral character of cryptographic work”), stated that “computer scientists and cryptographers occupy some of the ivory tower’s highest floors” (par. 1). The notion of the “data scientist” is emerging as an all-encapsulating title and the “hottest” job title of the 21st century (Chatfield, Shlemoon, Redublado, & Rahman, 2014, p. 2). Data scientists have also been called “gods” (Bloor, 2012), “rock stars” (Sadkowsky, 2014), “high priests” (Dwoskin, 2014; Nielsen, 2014), “engineers of the future” (van der Aalst, 2014) and “game changers” (Chatfield et al., 2014, p. 2).

So, can I trust them to write algorithms if the designers of algorithms do not see their algorithms as deeply political – as flowing from and perpetuating existing power relations, injustices and inequalities, or creating new ones? To what extent do they accept responsibility for the social impact of their algorithms? To what extent can they be held accountable?

In the past, when decisions were made about my financial future, my application to register or my application for health benefits, those decisions were also made by humans, often with less information at their disposal than the scope of information that algorithms now scrape and use to produce judgements and evaluations. These humans were not less biased, or more informed, than the designers and writers of algorithms, so why am I uncomfortable with algorithms?

One possible reason is that the creators of algorithms are faceless and non-accountable, hidden in a Kafkaesque maze where algorithms feed off one another in perpetual cycles of mutation. Where I could have petitioned the human who made the decision a number of years ago, or asked to see his or her supervisor, the creators of algorithms are hidden, faceless actors who create and destroy futures with impunity.

Do algorithm writers need a code of conduct, as proposed by John Naughton (6 December, 2015)? Do we need algorithmic angels (Koponen, 2015, April 18)? Is it possible to govern algorithms, and what would need to be in place (Barocas, Hood & Ziewitz, 2013)?

What are our options? What are our students’ options?

What are our options when my whole life becomes a single digit (Pasquale, 2015, October 14)?

In the context of the quantification fetish in higher education, where we count everything, what are the ethical implications when we reduce the complexity of our students’ lives to single digits, to data points on a distribution chart? What are the ethical implications when we then use these to allocate or withhold support, spending our resources on more ‘worthy’ candidates in the game of educational roulette? What does due process look like in a world of automated decisions (Citron & Pasquale, 2014)?
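Part of what unsettles me is how trivially the reduction can be performed. A deliberately crude sketch – every field, weight and threshold below is invented – of how a student’s circumstances become one digit:

```python
# Every field, weight and threshold here is invented for illustration.
def risk_score(logins_per_week: float, assignments_late: int,
               first_generation: bool) -> int:
    score = 5.0
    score -= 0.2 * logins_per_week       # an engagement proxy
    score += 0.8 * assignments_late      # a punctuality proxy
    score += 1.5 * first_generation      # a demographic proxy, already ethically loaded
    return max(0, min(9, round(score)))  # clamp to a single digit, 0-9

# Everything this number cannot see – illness, caring duties, a second
# job – is gone by the time support is allocated or withheld.
print(risk_score(logins_per_week=1.0, assignments_late=2, first_generation=True))  # 8
```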

What are our options? In a general sense I think the proposal by Morozov (2013) is an excellent start. He proposes four overlapping solutions, namely (1) politicise the issue of the scope and use of algorithms; (2) learn how to sabotage the system by refusing to be tracked; (3) create “proactive digital services”; and (4) abandon preconceptions. (See the discussion by Danaher, 2014.)

In the light of the asymmetrical power relationship between higher education and our students, we simply cannot ignore the need to reflect deeply on our harvesting and use of student data. When we see higher education as, first and foremost, a moral endeavour, our commitment to “do no harm” implies that we should be much more transparent about our algorithms and decision-making processes.

Who will hold higher education accountable for the data we harvest and our analyses?

Among other stakeholders, we cannot ignore the role of students. They have a right to know. They have a right to know what our assumptions about, and understandings of, their learning journeys are. They should demand that we do not assume that their digital profiles resemble their whole journey. They have a right to due process.

If only they knew.

Image credit: Image compiled from two images –

http://blog.wikimedia.org/2014/07/11/how-to-research-beyond-wikimetrics/

http://pic1.win4000.com/wallpaper/a/52047e1caa613.jpg

References

Bloor, R. (2012, December 12). Are the data scientists future CEOs? [Web log post]. Retrieved from http://insideanalysis.com/2012/12/are-the-data-scientists-future-ceos/

Buchanan-Oliver, M., Cruz, A., & Schroeder, J. E. (2010). Shaping the body and technology: Discursive implications for the strategic communication of technological brands. European Journal of Marketing, 44(5), 635-652.

Chatfield, A.T., Shlemoon, V.N., Redublado, W., & Rahman, F. (2014). Data scientists as game changers in big data environments. ACIS. Retrieved from http://www.researchgate.net/publication/268078811_Data_Scientists_as_Game_Changers_in_Big_Data_Environments

Citron, D. K., & Pasquale, F. A. (2014). The scored society: Due process for automated predictions. Washington Law Review, 89, 1-33.

Dwoskin, E. (2014, August 8). Big data’s high-priests of algorithms. The Wall Street Journal. Retrieved from http://tippie.uiowa.edu/management-sciences/wsj2014.pdf

Fox, D. (2010). In our own image. New Scientist, 208(2788), 32-37.

Kafka, F. (1984). The trial. Translated by Willa and Edwin Muir. London, UK: Penguin.

Nielsen, L. (2014). Unicorns among us: understanding the high priests of data science. Wickford, Rhode Island: New Street Communications.

Povey, H., & Ransom, M. (2000). Some undergraduate students’ perceptions of using technology for mathematics: Tales of resistance. International Journal of Computers for Mathematical Learning, 5(1), 47-63.

Sadkowsky, T. (2014, July 2). Data scientists: The new rock stars of the tech world. [Web log post]. Retrieved from https://www.techopedia.com/2/28526/it-business/it-careers/data-scientists-the-new-rock-stars-of-the-tech-world

van der Aalst, W. M. (2014). Data scientist: The engineer of the future. In Enterprise Interoperability VI (pp. 13-26). Springer International Publishing. Retrieved from http://bpmcenter.org/wp-content/uploads/reports/2013/BPM-13-30.pdf


About opendistanceteachingandlearning

Research professor in Open Distance and E-Learning (ODeL) at the University of South Africa (Unisa). Interested in teaching and learning in networked and open distance and e-learning environments. I blog in my personal capacity and the views expressed in this blog do not reflect or represent the views of my employer, the University of South Africa (Unisa).

3 Responses to Algorithmic decision-making in higher education: There be dragons there…

  1. @kuhgirl says:

    Algorithmic accountability will be a major topic in the years to come. I have a computer science and statistics background and I think that you have captured the issues well. Unfortunately, too many current and future technologists (I see this daily in my classrooms) view the IT artifacts they produce as technology-only and not as part of a broader socio-technical system. Much remains to be done in this respect. But I can also understand programmers and developers’ thinking in this regard: why should they be expected to correct processes that are already inherently flawed if their job is simply to automate them? The new mode of decision-making may sensitize us to the biases, etc. already embedded in the current processes, but who must be responsible for addressing the deficiencies?

    There is also something to be said for the potential benefits of using algorithms. Just last week I read an article in Inside Higher Ed on the faculty at Rutgers being unhappy about the use of analytics for their performance reviews. While there are indeed problems with a purely analytics-based approach in principle, and with the particular software package they selected in particular, I think that faculty in my school have forsaken their developmental and mentoring duties towards junior colleagues miserably by simply turning immediately to articles published and disregarding all other scholarly activity. How can faculty argue that they would consistently do a superior job to an analytics solution on this basis? Then there are the personal and political aspects that come into play, the unpredictable ebb and flow of such a review meeting, and the general lack of familiarity with policies and procedures and following them correctly. To me, in this context, an analytics package that impersonally takes all my professional activities into account may be preferable, although not exclusively so. But, and here I agree with your conclusion, I would definitely want to have insight into what is evaluated and how, and how the results are interpreted and used. There needs to be accountability for decisions on the value placed on different activities, for example. Not only students but also faculty have the right to know what the assumptions and understandings of our various instructional and scholarly journeys are.

    • Thanks so much for engaging with my tentative sense-making of algorithmic decision-making in higher education. I am relieved that I did not make a blunder with regard to my basic understanding of algorithms and their workings!

      Your comment – “But I can also understand programmers and developers’ thinking in this regard: why should they be expected to correct processes that are already inherently flawed if their job is simply to automate them?” – made me think of the recent trial of the “accountant of Auschwitz”, Oskar Gröning (http://www.theguardian.com/world/2015/jul/15/accountant-oskar-groning-auschwitz-jailed-for-the-of-300000-jews).

      One of his defences was “I was only the accountant” – and something in this defence touched a nerve. Without overestimating the scope of the ethical accountability of what programmers can and should do, I sincerely think we cannot shy away from some shared responsibility. What makes this more complex and messy in the algorithmic world is that algorithms are interacting and ‘learning’ – so who will we hold accountable?

      Your second point, regarding the role of analytics in the performance management of faculty, is very poignant, and I think you are spot on that analytics can and should play a role in a bigger, more holistic assessment of what I have done and achieved (or not) during the year. Unfortunately, our management structures do not necessarily have time for a more nuanced understanding of my performance and reduce a very complex phenomenon to a single digit. In my own performance agreement, many of the criteria are hard-coded based on assumptions which do not necessarily represent my performance and potential. In the broader context of the neoliberal lexicon of numbers and the dominant discourses of managerialism, analytics, the design of the metrics and their interpretation are inherently political.

      Thanks again for making me think further 🙂

      • @kuhgirl says:

        I definitely do not advocate entirely absolving IT professionals (a bit of a misnomer, more about this below) from ethical responsibility and, as I mentioned, much remains to be done in this respect. Nevertheless, they should not be solely responsible and accountable either. And algorithmic accountability is a topic that should be of great concern but, like privacy, will likely take a few years to attract attention and action from the general public.

        Given that IT professionals are not truly professionals – that is, licensed to practice by a professional body that maintains standards and adherence to a code of ethics – there is no single accredited educational path for most IT positions. Thus the likelihood that IT professionals would have been exposed to ethical issues and educated in ethical decision-making is relatively low. This lack of true ‘professional’ status also affects liability. Professionalization of IT is unlikely for a variety of reasons.

        In lieu thereof I have argued for organizations, in the US in particular, to take overall accountability and responsibility, since IT workers themselves cannot be held liable in malpractice lawsuits but their employing organization can. With the increased focus on governance, compliance and risk, this may have some measure of success. Of course, it is a concern that compliance is not independently overseen, but increased public and government scrutiny, accompanied by a growing number of whistle-blowers, may encourage at least some improvement. Unfortunately, in the US I foresee this becoming a discussion of liability (for example, who is liable if a self-driving car causes an accident?) rather than integrity, as has been the case with organizations’ ethics programs in the past. Furthermore, I suspect that existing power relations will be further embedded and possibly expanded if there isn’t massive resistance and backlash from the public (hence I find your blog post quite timely and important).

        With regard to the use of analytics for performance evaluation: I think that there is little to counter the reductionist managerial thinking, and numbers consistently have more influence and impact than narrative. So, if I have to choose my poison in my context, I advocate for a ‘single digit’ that is derived in a more holistic (taking more activities into consideration) and consistent (same criteria applied to all) manner, provided it is transparent. That being said, I agree wholeheartedly that the dominance of limited quantitative measures is a problem in many contexts beyond performance evaluation. Moreover, the purpose and intent of those who measure, their authority and credibility and by whom they are deemed authoritative and credible, the possible impact on those being measured, the subjective vs. objective nature of measurement, the choice of indicators, the use of quantitative and qualitative data and measures, and the possibility of and ability for precise and accurate measurement of indicators are all issues that need to be interrogated in this regard (which brings us back to your point: how is this to be done with algorithms that are generated through machine learning?). Now don’t even get me started on the use of IT in recruitment processes… At least if your performance is being measured you’ve managed to get your foot in the door.

        I’ll conclude with an extract from one of my articles: “It is not only employees but also, according to Reynolds (2007), the general public that has not yet realized the critical importance of ethics as it applies to IT, because too much emphasis has been placed on the technical issues. IT affects fundamental rights involving access to information, copyright protection, intellectual freedom, accountability, and security. Therefore employees in the IT function are ‘not just building and manipulating hardware, software, and code, they are building systems that help to achieve important social functions, systems that constitute social arrangements, relationships, institutions, and values’ (Johnson, 2008). As Mason (1986: 10) stated, our moral imperative is clear: ‘we must insure that information technology, and the information it handles, are used to enhance the dignity of mankind [sic]’ (Mason, 1986: 11).”

        Thanks again for your thought-provoking post and for your reply!
