Algorithmic decision-making in higher education: There be dragons there…


Algorithms do not have agency. People write algorithms. Do not blame algorithms.

Do not blame the drones. The drones are not important. The human operators are important. The human operators of algorithms are not lion tamers.

Do not blame the drones for making you depressed. Do not blame the algorithms for blowing up towns. Oceania has not always been at war with Eastasia (Ellis, n.d.).

I am neither a data scientist nor do I have any background in computer science. I am an educator and researcher with a keen interest in how we engage with student data, issues pertaining to privacy and, increasingly, the potential and harm in algorithmic decision-making in higher education. Amidst claims and promises that algorithmic decision-making will help higher education make better and faster decisions about student applications, personalise student learning and assessment, and increase student retention and success, I cannot help but feel uncomfortable about the design, accountability and unintended consequences of algorithms in higher education. Reading “The black box society” by Frank Pasquale (2015), work by John Danaher (2014) and Evgeny Morozov (2013), the provocation piece by Barocas, Hood and Ziewitz (2013), and the unfolding unease about the scope and impact of artificial intelligence and machine learning only strengthens my discomfort.

I would also like to acknowledge the many conversations with a colleague of mine who would often be bemused (if not irritated) by my concerns about algorithms – their reach, their design and how they shape our world. If he were to edit this blog, he would have immediately cautioned against the implication that algorithms have agency and act independently of human design and intention. Whenever I would share an article about how algorithms shape our lives, he would always state: No, it is not the algorithm; it is the person (or team) who designed the algorithm. He would emphasise that the algorithm is but the tool in the hand of the designer… If algorithms do discriminate, it is because they were designed to discriminate. If algorithms are biased, it is because the biases of their designers and developers were captured.

So, given that algorithms increasingly shape my world, why does this make me feel so uncomfortable and uneasy?

Was I just as uncomfortable when humans used to make decisions about what I am worth, what my creditworthiness is, what my health risk profile is? Were humans less biased than algorithms? Or to what extent does the bias inherent in algorithms impact me more than when the same bias was present in my dealings with a human behind a desk? Am I just as uncomfortable with algorithms when I rely on them for the best route to a destination, find the cheapest airfare, or when I enjoy reading a book found as a result of a recommender system?

I trust algorithms when searching for a cheap airfare or the best route to avoid a traffic jam, so why am I so uncomfortable with algorithms in higher education? Can I trust them?

Oops, I did it again. Is it not strange that it is somehow easier to grasp and deal with the impact of algorithms on our lives by ascribing human qualities to them?

Povey and Ransom (2000) found that students using technology in mathematics anthropomorphise technology as a mechanism to voice their discomfort with the seeming power struggle between technology and humanity. These authors point out that talking about technology in human terms is “an aspect of a wider contemporary discourse on the relationship between technology and society” (p. 60). They refer to the public uproar when a computer beat world chess champion Garry Kasparov:

[The outcome of the match] threw some commentators into a tizzy. After all, they reasoned, how long can it be before [a computer], say, launches all the missiles in the world or gets its own late-night talk show? (People Magazine, 26 May 1997, p. 127, as quoted by Povey and Ransom, 2000, p. 60)

Does this sound familiar? Is it not close to the way we talk about algorithms?

Fox (2010) also explored the phenomenon of anthropomorphism and states that it “is rampant in all cultures and religions” (par. 2) and “ingrained in human nature” (par. 8), from the way we worship gods that resemble ourselves to how we make sense of a “largely meaningless world” (par. 16). He proposes that we “are more likely to anthropomorphise when faced with unpredictable situations or entities” (par. 17). By anthropomorphising non-human actors and technology, we claim a “sense of control” (par. 18), belonging and connection. As a result, we build relationships with our computers and talk about the stock market as climbing higher or flirting with higher values… (par. 29).

Specific to our anthropomorphising of technology, Buchanan-Oliver, Cruz and Schroeder (2010) claim that the way we speak about technology originates from “deeply-seated anxieties toward the mythic figure of the cyborg, which has been read as a monstrous, Frankensteinian icon inviting both sympathy and revulsion” (p. 636). As such, talking about algorithms as having agency may resemble “technology as prosthesis” (Buchanan-Oliver et al., 2010, p. 642) or an extension of humanity (with all of our hopes, goodwill, fears, bias and hunger for power). The way we talk about algorithms may furthermore herald increasingly porous boundaries between human and posthuman, where we “mutate at the rate of cockroaches, but we are cockroaches whose memories are in computers, who pilot planes and drive cars that we have conceived, although our bodies are not conceived at these speeds” (Stelarc and Orlan, quoted by Buchanan-Oliver et al., 2010, p. 644). Technology and algorithms are then no longer external tools to be used by us, but have become “an intrinsic part of human subjectivity” (Buchanan-Oliver et al., 2010, p. 645).

And then there is the ever-increasing threat that machines will outsmart us… (see Dockrill’s post of 11 December 2015 – “Scientists have developed an algorithm that learns as fast as humans. That’s the tipping point right there, folks.”). Or see the collection of essays edited by John Brockman (2015) – “What to think about machines that think.”

While it is tempting to think in terms of a binary – reflecting on situations where decisions are made exclusively by humans compared to situations where decisions are made exclusively by algorithms – the reality is much more nuanced, as John Danaher points out in a post of June 15 (2015) – see the diagram below.


Image credit: Danaher (2015, June 15)

What I like about Danaher’s proposal is that it provides a more nuanced understanding of not only the different phases of data collection and use, but also the way the framework relates these different phases to different combinations of human and algorithm interaction. Different combinations are possible where, for example, algorithms collect the information, but the analysis is done by humans only, shared between humans and algorithms, done by algorithms with humans supervising, or done by algorithms without human supervision. (For a full discussion of the different combinations and implications, see Danaher, 2015.)
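For readers who think more easily in code than in diagrams, the combinatorial space Danaher describes can be sketched as a toy enumeration. This is my own illustrative shorthand, not Danaher’s notation: the stage and involvement labels are assumptions drawn loosely from his collection/analysis/decision phases.

```python
from itertools import product

# Stages of an algorithmic decision-making process (loosely after Danaher's
# collection / analysis / decision phases) and possible modes of human
# involvement at each stage. Labels are my own shorthand.
STAGES = ["collection", "analysis", "decision"]
INVOLVEMENT = [
    "humans only",
    "shared humans/algorithms",
    "algorithms, humans supervising",
    "algorithms only",
]

# Each possible system assigns one involvement mode to each stage.
systems = list(product(INVOLVEMENT, repeat=len(STAGES)))
print(len(systems))  # 4 modes across 3 stages -> 4**3 = 64 configurations

# Example configuration: algorithms collect the data, the analysis is
# shared, and algorithms decide under human supervision.
example = dict(zip(STAGES, [
    "algorithms only",
    "shared humans/algorithms",
    "algorithms, humans supervising",
]))
```

Even this crude sketch makes the point of the diagram: “human versus algorithm” is not a binary but one axis in a much larger space of possible arrangements.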

Important to note is that there is possibly another layer embedded in the above diagram, recognising that algorithms may have been written exclusively by humans, or developed through iterative cycles of artificial intelligence. Embedded and encoded in these processes are human bias and goodwill – where accountability for, and the ethical implications of, this mutually constitutive process resemble a ‘wicked’ problem, described as “a social or cultural problem that is difficult or impossible to solve for as many as four reasons: incomplete or contradictory knowledge, the number of people and opinions involved, the large economic burden, and the interconnected nature of these problems with other problems.”

The ‘wickedness’ of trying to make sense of my discomfort with algorithmic decision-making in this blog is also due, firstly, to my lack of the theoretical tools and academic background to fully understand how algorithms work and, secondly, to the difficulty of explaining the intricacies of my discomfort about ‘losing control’…

Having acknowledged my possible lack of understanding, allow me then to voice my discomfort and understanding in layperson’s terms. Though I have acknowledged that we should not think in terms of binaries – humans making decisions versus algorithms (created by humans) making the decisions – thinking in terms of a binary gives me a handle on this slippery phenomenon.

The definition and scope/scale of the knowledge about me

In times past, when humans made decisions about my creditworthiness, they most probably relied on past documents and records (on file) of my interactions with their institution, and information I provided on the prescribed application form, with my signature to confirm that I told the truth. I cannot deny that my race, gender, language and home address played (and still play) a crucial role in their decisions. Depending on who interviewed me (and in those years it was almost certain to have been a white male), my chances of being successful were fairly good. Even today, if I were to be interviewed by a person of a different race and home language, the legacy of my whiteness may actually carry the day.

In the context of algorithmic decision-making, I am not sure (actually, I never know) which sources of information, collected in which context and for what purpose, are being used to inform the final decision. As each source of information is combined with another, each source’s boundary of integrity collapses, and the biases and assumptions that informed the collection of data in one context are collapsed and morphed with other sources of information, each with its own biases and contexts. We are becoming increasingly small and vulnerable nodes in the lattice of information networks where, like the character K in Franz Kafka’s The Trial, we are never told what the allegations against us are, or what the sources of information are. All we are told is that “Proceedings have been instituted against you…” (Kafka, 1984, p. 9), without ever being able to know what they know.

[See the essay by John Danaher on issues regarding fairness in algorithmic decision-making (2015, November 5)].

The actor, algorithms and data brokers

Recently, Waddell (2015), in an interview with Phillip Rogaway (author of “The moral character of cryptographic work”), stated that “computer scientists and cryptographers occupy some of the ivory tower’s highest floors” (par. 1). The notion of the “data scientist” is emerging as an all-encapsulating title and the “hottest” job title of the 21st century (Chatfield, Shlemoon, Redublado, & Rahman, 2014, p. 2). Data scientists have also been called “gods” (Bloor, 2012), “rock stars” (Sadkowsky, 2014), “high priests” (Dwoskin, 2014; Nielsen, 2014), “engineers of the future” (van der Aalst, 2014) and “game changers” (Chatfield et al., 2014, p. 2).

So, can I trust the designers of algorithms if they don’t see their algorithms as deeply political – as flowing from and perpetuating existing power relations, injustices and inequalities, or creating new ones? To what extent do they accept responsibility for the social impact of their algorithms? To what extent can they be held accountable?

In the past, decisions on my financial future, my application to register or my application for health benefits were also made by humans, often with less information at their disposal than the scope of information that algorithms now scrape and use to produce judgements and evaluations. These humans were not less biased, nor more informed, than the designers and writers of algorithms, so why am I uncomfortable with algorithms?

One possible reason is that the creators of algorithms are faceless and non-accountable, hidden in a Kafkaesque maze where algorithms feed off one another in perpetual cycles of mutation. Where I could have petitioned the human who made the decision a number of years ago, or asked to see his or her supervisor, the creators of algorithms are hidden, faceless actors who create and destroy futures with impunity.

Do algorithm writers need a code of conduct, as proposed by John Naughton (6 December, 2015)? Do we need algorithmic angels (Koponen, 2015, April 18)? Is it possible to govern algorithms, and what should be in place (Barocas, Hood & Ziewitz, 2013)?

What are our options? What are our students’ options?

What are our options when my whole life becomes a single digit (Pasquale, 2015, October 14)?

In the context of the quantification fetish in higher education where we count everything, what are the ethical implications when we reduce the complexity of our students’ lives to single digits, to data points on a distribution chart? What are the ethical implications when we then use these to allocate or withhold support to spend our resources on more ‘worthy’ candidates in the game of educational roulette? What does due process look like in a world of automated decisions (Citron and Pasquale, 2014)?

What are our options? In a general sense I think the proposal by Morozov (2013) is an excellent start. He proposes four overlapping solutions namely (1) to politicise the issue of the scope and use of algorithms; (2) learn how to sabotage the system by refusing to be tracked; (3) create “proactive digital services”; and (4) abandon preconceptions. (See the discussion of Danaher, 2014).

In the light of the asymmetrical power relationship between higher education and our students, we simply cannot ignore the need to reflect deeply on our harvesting and use of student data. When we see higher education as, first and foremost, a moral endeavour, our commitment to “do no harm” implies that we should be much more transparent about our algorithms and decision-making processes.

Who will hold higher education accountable for the data we harvest and our analyses?

Among other stakeholders, we cannot ignore the role of students. They have the right to know. They have a right to know what our assumptions and understandings of their learning journeys are. They should demand that we do not assume that their digital profiles resemble their whole journey. They have a right to due process.

If only they knew.

Image credit: Image compiled from two images –


Bloor, R. (2012, December 12). Are the data scientists future CEOs? [Web log post]. Retrieved from

Buchanan-Oliver, M., Cruz, A., & Schroeder, J. E. (2010). Shaping the body and technology: Discursive implications for the strategic communication of technological brands. European Journal of Marketing, 44(5), 635-652.

Chatfield, A.T., Shlemoon, V.N., Redublado, W., & Rahman, F. (2014). Data scientists as game changers in big data environments. ACIS. Retrieved from

Citron, D. K., & Pasquale, F. A. (2014). The scored society: Due process for automated predictions. Washington Law Review, 89, 1-33.

Dwoskin, E. (2014, August 8). Big data’s high-priests of algorithms. The Wall Street Journal. Retrieved from

Fox, D. (2010). In our own image. New Scientist, 208(2788), 32-37.

Kafka, F. (1984). The trial. Translated by Willa and Edwin Muir. London, UK: Penguin.

Nielsen, L. (2014). Unicorns among us: understanding the high priests of data science. Wickford, Rhode Island: New Street Communications.

Povey, H., & Ransom, M. (2000). Some undergraduate students’ perceptions of using technology for mathematics: Tales of resistance. International Journal of Computers for Mathematical Learning, 5(1), 47-63.

Sadkowsky, T. (2014, July 2). Data scientists: The new rock stars of the tech world. [Web log post]. Retrieved from

van der Aalst, W. M. (2014). Data scientist: The engineer of the future. In Enterprise Interoperability VI (pp. 13-26). Springer International Publishing. Retrieved from


Seeing Jesus in toast: Irreverent ideas on some of the claims pertaining to learning analytics

“In 2004 Diane Duyser sold a decade-old grilled cheese sandwich that bore a striking resemblance to the Virgin Mary. She got $28,000 for it on eBay” (Matter, 2014), and in 2009 Linda Lowe found an image of Jesus staring at her from a piece of toast. The phenomenon is called pareidolia and has been explained as completely natural, and all too human (Liu, Li, Feng, Li, Tian, & Lee, 2014; Tanne, 2014). Pareidolia is described as a “psychological phenomenon involving a stimulus (an image or a sound) wherein the mind perceives a familiar pattern of something where none actually exists.”

In the world of data and predictive analysis, a relatively similar phenomenon is called apophenia – seeing patterns where none actually exist (boyd & Crawford, 2012). In the context of higher education, where we have access to ever-increasing volumes, velocity and variety of student digital data, apophenia is an uncomfortable companion in the analysis of student data. Harvesting and combining student data from disparate sources opens up the opportunity – and the risk – of inferring relations unthinkable ten years ago.

As we engage with an ever-increasing amount and scope of student data, we may be tempted to rush to look for patterns without considering our own assumptions and epistemologies, how we select our data, how we slice and dice it, how we clean our data sets, and how we deal with the often uncomfortable silences in our data and analysis. We may be tempted to make claims about the quality and impact of student engagement based on the number of clicks, participation in online discussion forums, and the number of downloads of resources. From this evaluation of the depth and quality of their engagement (often based on the quantification of our definition of ‘engagement’) we then design personalised assessments, curricula and the allocation of resources. Increasingly, our predictive analyses also utilise the immense speed and scope of algorithmic decision-making, destining students for a learning journey over which they have very little control or insight regarding its machinations.

It may be worthwhile to heed the words of Silver (2012), who warns that in noisy systems with underdeveloped theory there is a real danger of mistaking the noise for a signal, not realising that the noise pollutes our data with false alarms, “setting back our ability to understand how the system really works” (p. 162). In a context where our abilities to harvest and analyse student data may outpace not only our regulatory frameworks but, more importantly, our theoretical and ethical dispositions, the latest issue of the Journal of Learning Analytics comes as a welcome relief.
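Silver’s warning can be made concrete with a toy simulation of my own (not Silver’s, and with entirely made-up variable names): generate purely random “engagement” metrics for a cohort of students and then search them for the best predictor of an equally random “success” outcome. With enough candidate variables, something will always correlate – a pattern where none actually exists.

```python
import random

random.seed(1)

# Purely random data: 200 "students", 50 candidate "engagement" metrics,
# and a "success" outcome that is, by construction, independent of them all.
n_students, n_metrics = 200, 50
success = [random.random() for _ in range(n_students)]
metrics = [[random.random() for _ in range(n_students)]
           for _ in range(n_metrics)]

def correlation(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Pick the metric that *appears* most predictive of success.
best = max(range(n_metrics),
           key=lambda i: abs(correlation(metrics[i], success)))
print(f"metric {best} correlates with success at r = "
      f"{correlation(metrics[best], success):.2f}")
# With 50 tries on pure noise, the winning r is typically far larger than
# a single pre-registered test would produce: Jesus in toast, in code.
```

The point is not the specific numbers but the procedure: searching many variables for the strongest relationship, then reporting only the winner, manufactures apparent signal from noise.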

In this issue the relationship between learning analytics and theory is recognised as important (“Why theory matters more than ever in the age of big data”) and explored in contexts such as “Theory-led design of instruments and representations of learning analytics” and “Beyond time-on-task: The relationship between spaced study and certification in MOOCs.” Other contributions include, inter alia, “Does seeing one another’s gaze affect group dialogue? A computational approach” and “Learning analytics to support teachers during synchronous CSCL: Balancing between overview and overload.”

As the amount, velocity and variety of student data increase, so will the noise and the potential to see patterns that either don’t exist, or that do not contribute to understanding student success as the result of increasingly messy, complex and non-linear interactions between context, students and institutions at the intersection of curricula, pedagogies, assessment and institutional (in)efficiencies and operations. We therefore need to slow down our conversations (e.g. Selwyn, 2014) on the potential of learning analytics and create spaces to think critically about the epistemologies and ontologies that shape our harvesting and analyses of student data.

The plea to slow down our discussions on, and embrace of, the potential of learning analytics seems out of place in a higher education that has become an increasingly privatised and/or costly commodity, characterised by an obsession with return on investment and by just-in-time products delivered by just-in-time labour, aiming to get the products off the shelves in the fastest possible time. Instead of slowing down, we would rather speed up our search for signal amidst the noise. There is, however, a danger that we may find Jesus in toast.

Let me state clearly that there is no doubt in my mind that evidence, and the ethical harvesting and analysis of student data, can and should inform the management of teaching and learning, the development of curricula, and assessment and student support strategies and interventions. My question is not whether we should harvest and analyse student data, but rather how we engage with student data and the search for relationships that matter, in the light of the fact that higher education is an open and recursive system. How do we engage with evidence of what works when the evidence does not tell us whether the intervention was appropriate and ethical?

I agree with Biesta (2007, 2010) that the issue is not the usefulness of evidence, but rather how we define evidence, what we include and exclude, acknowledging our assumptions about data, and honesty about the implications of our research design decisions. Current evidence-based decision-making practices favour technocratic modes that assume that “the only relevant research questions are questions about the effectiveness of educational means and techniques, forgetting, among other things, that what counts as ‘effective’ crucially depends on judgments about what is educationally desirable” (Biesta, 2007, p. 5). We need to understand the limitations of our designs when we explore, in an open, semiotic and recursive system, how interventions work. We need to acknowledge our search for the “magic bullet of causality” (Biesta, 2010, p. 496). “Much talk about ‘what works’ … operates on the assumption of a mechanistic ontology that is actually the exception, not the norm in the domain of human interaction” (Biesta, 2010, p. 497).

There is a real danger that we think of and apply learning analytics as if education were a closed and isolated environment, such as a laboratory setting, where we can limit the number of variables and report on those that made a difference. Contrary to such an understanding, it would be safer (and most probably closer to reality) to think of education in terms of the Cynefin framework’s (Snowden & Boone, 2007) proposal of simple, complicated, complex and chaotic environments. In all four of these environments it is possible to harvest and analyse evidence, but with very different results. I have a strong suspicion that we think of education as simple, or at most complicated, while education is most probably complex, if not chaotic at times. In complicated environments, Snowden and Boone (2007) suggest, cause-and-effect relationships are discoverable, but there is more than one ‘right’ answer. In complex systems there are no right answers, and though it may be possible to trace correlation, causation becomes almost impossible to prove and, more importantly, to replicate.

At many educational conferences, when I listen to reports and evidence on interventions that resulted in an increase in student retention and success, I cannot help but see Jesus smiling at me from a piece of toast.

Image credit: Adapted from


Biesta, G. (2007). Why “what works” won’t work: evidence-based practice and the democratic deficit in educational research, Educational Theory, 57(1), 1–22. DOI: 10.1111/j.1741-5446.2006.00241.x.

Biesta, G. (2010). Why ‘what works’ still won’t work: from evidence-based education to value-based education, Studies in Philosophy of Education, 29, 491–503. DOI 10.1007/s11217-010-9191-x.

boyd, d., & Crawford, K. (2012). Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society, 15(5), 662-679.

Liu, J., Li, J., Feng, L., Li, L., Tian, J., & Lee, K. (2014). Seeing Jesus in toast: Neural and behavioral correlates of face pareidolia. Cortex, 53, 60-77.

Selwyn, N. (2014). Distrusting educational technology: Critical questions for changing times. New York, NY: Routledge.

Silver, N. (2012). The signal and the noise: Why most predictions fail – but some don’t. New York, NY: Penguin Press.

Snowden, D. J., & Boone, M. E. (2007). A leader’s framework for decision making. Harvard Business Review, 85(11). Retrieved from

Tanne, J. H. (2014). Seeing Jesus in a piece of toast and other scientific discoveries win Ig Nobel awards. BMJ, 349, g5764.


Students’ role in learning analytics: From data-objects to collaborators


Image credit:

In most of the discourses on learning analytics, students and their data are seen as mere data points or data objects and recipients of services. Student data are the source of many hours of enjoyment as data analysts, educators, student support staff and administrators count and correlate a range of variables, such as the number of clicks, the number of downloads, and the number of attempts to pass a course. We then add gender, race, age, employment status and physical address (as a proxy for socio-economic income) to the mix, and voilà, we can now personalise their learning based on our analyses, because we know what they need…

There is, however, a real danger that learning analytics may serve as prosthetic and parasitic, supplementing and replacing authentic learning and frantically monitoring “little fragments of time and nervous energy” (Murphie, 2014, p. 19). While I would like to propose that seeing students as collaborators and participants, rather than data points and objects, can assist in humanising learning analytics, we need to understand the frantic gathering of student data in the current context of higher education: institutions dancing to the tune of “evidence-based management” (see Biesta, 2007, 2010), where we subscribe to “measurement mania” (Birnbaum, 2001, p. 197), audit practices as “rituals of verification” (Power, 1999, 2004) and the “neoliberal lexicon of numbers” (Cooper, 2014, par. 5). This is the new normal, where funding follows performance instead of preceding it (Hartley, 1995), and where the success of prediction and the increase in returns on investment have become survival practices in “Survivor: The higher education series.”

Considering the amount of student data higher education institutions have access to, and the fiduciary duty of higher education to address concerns about sometimes appalling rates of student failure (Slade & Prinsloo, 2013), the lack of effective or appropriate student support, and institutional failures, higher education cannot afford not to collect and analyse student data. Knowing more about our students raises, however, a number of ethical issues, such as whether they know that we are observing them and analysing their online behaviours for clues to determine the allocation of resources, weighing the need for intervention against the cost of intervention and the probability that the intervention will have the necessary effect and therefore guarantee a return on investment. There are also issues related to whether they have access to their digital profiles, whether they can verify their records and provide context, whether they are protected against downstream use, and who will have access to their records and for what purposes. And finally, there is the issue of the ethics of knowing – for knowing implies responsibility. Once we know, for example, that a student from a poor neighbourhood (with his or her address as proxy) has not logged on for a week, or has revealed having difficulty coping with the course materials, we have an obligation to act. So, while there are also ethical issues in not knowing what we could have known, we often forget the ethical responsibilities that come into play when we do know…

In the broader discourses of education, and specifically higher education, various practices have been imported from medicine. Examples include institutional review boards (Carr, 2015) and the seemingly inappropriate belief in the gospel of evidence-based management (Biesta, 2007, 2010). There is also the practice of educational triage, which, although practised, has not really entered the main discourses relating to student support, learning analytics and institutional research (Manning, 2012; Prinsloo & Slade, 2014). In a context where higher education increasingly faces changes in funding regimes and increased costs and demands, Prinsloo and Slade (2014) ponder the question: “how do we make moral decisions when resources are (increasingly) limited?” (p. 309). As a way of balancing costs with the impact and ethical dimensions of decisions, educational triage provides an interesting perspective on balancing cost, care and ethics. (For a full discussion of educational triage as construct and practice, see Prinsloo and Slade, 2014.)

Though the links, overlaps and differences between practices in the medical fraternity and higher education are acknowledged, we cannot ignore the fact that our thinking about ethics in educational research has been hugely influenced by ethical principles, guidelines and practices in the medical fraternity.

So it was with interest that I read about the Precision Medicine Initiative (PMI), launched early in 2015 by the White House. The PMI aims, among other things, to provide individualised health care based on collaboration between patients, medical staff and researchers, using collected and gifted data to prevent or effectively treat diseases. Amidst the hype of empowering patients and offering individualised care, of particular interest to me, as an educator and researcher in the scope and ethical implications of learning analytics, is the initiative’s aim “to engage individuals as active collaborators – not just as patients or research objects.”

Excursus: In the context of learning analytics, student involvement as participants was first mentioned by Kruse and Pongsajapan (2012) and expanded by Prinsloo and Slade (2014, 2015).

It is important to note that, as far as I could establish, there are two versions of the privacy and trust principles, namely those of 8 July and 9 November. Also interesting is the critique and feedback (dated 4 August 2015) on the 8 July proposal by William W. Horton (Chair: ABA Health Law Section) of the American Bar Association.

This initiative is founded on a number of privacy and trust principles ensuring that the right to privacy and ethical research are guaranteed. The principles include issues regarding governance; transparency; participant empowerment; respect for participant preferences; data sharing, access, and use; and data quality and integrity (version of 9 November). The 8 July version included a section on the assumptions that informed the principles, as well as a section on “security.” The earlier (8 July) version’s section titled “Reciprocity” is titled “Participant empowerment through access to information” in the version of 9 November.

The earlier version of the PMI (dated 8 July) acknowledges a number of assumptions (p. 2) such as:

  • “Participants will be partners in research, and their participation will be entirely voluntary”
  • “Participants will play an integral role in the cohort’s governance through direct representation on committees”
  • With regard to the variety of data sources the PMI states – “Participants will be able to voluntarily contribute diverse sources of data – including medical records, genomic data, lifestyle information, environmental data, and personal device and sensor data.”
  • The PMI is also clear that participants will be able to “access their own medical information.”
  • Security is addressed, with the PMI guaranteeing “a robust data security framework” ensuring “strong administrative, technical, and physical safeguards.”
  • With regard to ‘consent’ the PMI suggests that consent is dynamic, ongoing and negotiated – “Given the anticipated scope and duration of PMI, single contact consent at the time of participant enrolment will not be sufficient for building and maintaining the level of public trust we aim to achieve. A consent process that is dynamic and ongoing will better serve the initiative’s goals of transparency and active participant engagement.”

There are no reasons provided why the assumptions contained in the version of 8 July were deleted from the final version of 9 November. As far as I could assess, apart from the issue of ‘security’, all the assumptions are sufficiently covered in the final version (9 November).

In the next section, I provide a short and selective overview of the PMI (9 November), focusing specifically on aspects that I suspect can benefit our policies, frameworks and practices in learning analytics.

For example, under ‘Governance’ (p. 2) the PMI suggests the following:

  • Substantive participant representation – “should include substantive participant and community representation at all levels of program oversight, design, and implementation” (emphasis added). Interestingly, the 8 July version enshrined active collaboration among participants with regard to governance by stating as a fundamental assumption – “Participants will play an integral role in the cohort’s governance through direct representation on committees established to oversee cohort design and data collection, use, management, security, and dissemination” (p. 2; emphasis added).

Excursus: Would there be a difference between substantive and direct participation and representation? How practical would such a principle be in learning analytics?

  • The PMI further states that “Risks and potential benefits of research for families and communities should be considered in addition to risks and benefits to individuals. The potential for research conducted using PMI cohort data to lead to stigmatization or other social harms should be identified and evaluated through meaningful and ongoing engagement with relevant communities.”

Excursus: Currently there is little or no oversight in learning analytics despite the ethical concerns and the potential for harm. Willis, Slade and Prinsloo (in press) suggest that while the prevention of harm and discrimination in research falls under the purview of institutional review boards (IRBs), there is currently no clear guidance on whether learning analytics qualifies as research and therefore needs IRB oversight – and, if learning analytics is not considered research, who will oversee its ethical implications and its potential for harm and discrimination, and how.

The PMI addresses the issue of ‘Transparency’ as follows:

  • Transparency is accepted as dynamic, ongoing and participatory. The 8 July version (p. 4) stated explicitly – “To ensure participants remain adequately informed throughout participation in the cohort, information should be provided at the point of initial engagement and periodically thereafter. Information should be communicated to participants clearly and conspicuously concerning: how, when, and what information and specimens will be collected and stored; generally how their data will be used, accessed, and shared; the goals, potential benefits, and risks of participation; the types of studies for which the individual’s data may be used; the privacy and security measures that are in place to protect the participant’s data; and the participant’s ability to withdraw from the cohort at any time, with the understanding that data included in aggregate data sets or used in past studies and studies already begun cannot be withdrawn” (emphasis added). The 9 November version (p. 2) covers all of these in separate points but also adds that “Communications should be culturally appropriate and use languages reflective of the diversity of the participants” (p. 3; emphasis added).

Excursus: As Prinsloo and Slade (2015) suggest, it is crucial that we think past the binaries of simply opting in or out, towards a more nuanced, continued and dynamic interaction with students as collaborators in a student-centric learning analytics, at different intervals during the process. Instructive here is the PMI’s suggestion that participants should be involved at the “point of initial engagement and periodically thereafter” and fully informed regarding “how, when, and what information and specimens will be collected and stored; generally how their data will be used, accessed, and shared; the goals, potential benefits, and risks of participation; the types of studies for which the individual’s data may be used; the privacy and security measures that are in place to protect the participant’s data; and the participant’s ability to withdraw from the cohort at any time, with the understanding that data included in aggregate data sets or used in past studies and studies already begun cannot be withdrawn.”

  • “Participants should be promptly notified following discovery of a breach of their personal information.”

Excursus: Currently, due to the lack of oversight or regulation of learning analytics (see Willis et al., in press), students are often left without any recourse and may not even know when there has been a breach of their personal information.

Of specific interest is Horton’s (ABA Health Law Section) suggestion that there is a need to take cognizance of the OECD guidelines regarding the disclosure and protection of information, with specific mention of the potential that shared information may be used for commercial purposes or for non-research purposes (e.g., by insurers, employers, or law enforcement).

With regard to “Respecting Participant Preferences” (pp. 3-4), the 9 November version includes reference to the following:

  • To be “broadly inclusive, recruiting and engaging individuals and communities with varied preferences and risk tolerances concerning data collection and sharing.” Interestingly, the 8 July version also referred to the use of information, and not only its collection and sharing. What does this deletion signify?
  • Another interesting point is that the PMI (both versions) stress “participant autonomy and trust.” This is achieved through “a dynamic and ongoing consent and information sharing process” (9 November).

Excursus: What does “participant autonomy” mean in the context of the asymmetrical power relationship between medicine and patients? How autonomously can patients really make decisions, given that they do not necessarily understand the implications of withdrawal and, secondly, that expert opinions on options often differ? The PMI states that participants will be able to “re-evaluate their own preferences as data sharing, user requirements, and technology evolve” (p. 3) – and while this should be lauded, what are the implications of withdrawal?

 “Participants should be able to withdraw their consent for future research use and data sharing at any time and for any reason, with the understanding that data included in aggregate data sets or used in past studies and studies already begun cannot be withdrawn” (p. 3; emphasis added)

Interestingly, the PMI (in both versions) states that consent cannot be withdrawn in the case of studies that have already begun. Do researchers and participants know, from the outset, how long the harvesting, analysis and use of information will last? What happens if the scope changes due to new insights? Students in higher education contexts may be protected by the fact that courses have predetermined durations and that most learning analytics projects take place at course level.

With regard to withdrawal from the PMI, in the feedback provided by Horton (ABA Health Section) he suggests the “development of policies, procedures and notifications to Participants which would clarify when and how the right [to withdraw] could be exercised, distinguish between identifiable and non-identifiable personal information, and articulate the limits on the withdrawal of information already in use” (p. 10).

The principle of “Participant empowerment through access to information” (p. 4) (called ‘Reciprocity’ in the 8 July version) includes, inter alia, the following:

  • “should enable participants’ access to the medical information they contribute to the PMI in consumer-friendly and innovative ways”
  • That “educational resources should be made available to participants to assist them in understanding their health information and to empower them to make informed choices about their health and wellness”

Excursus: Currently, in higher education, anecdotal evidence suggests that students do not have access to the learning analytics data that institutions have collected about them. Secondly, what I find interesting about the PMI is its commitment to make educational resources available to assist participants in understanding the analyses and findings, so that patients can make informed decisions.

In many of the discourses surrounding learning analytics, the emphasis is on the benefits institutions will derive from learning analytics to make choices on behalf of students. The PMI seems to turn this around and ensure that patients will be able to make the decisions affecting their health.

The feedback provided by Horton (ABA Health Law Section) is crucial in this regard. Horton (p. 11) raises the question of whether making the information and analysis available to patients may create “an expectation of medical intervention and/or treatment.” This returns to the question raised earlier in this blog: does knowing bring with it the responsibility of caring?

The second issue Horton raises with regard to this principle is whether healthcare providers have the necessary capabilities and capacities to usefully apply the PMI data. This is also pertinent in the higher education context, where it is not clear whether faculty or student support teams have the necessary capabilities and capacities to interpret and act on the analyses.

Horton also raises a third issue that is pertinent when thinking about learning analytics, namely the requirement that “Unvalidated findings should never be communicated to participants because they may be confusing and obfuscate any meaningful application to improve healthcare” (p. 12; emphasis added). Anecdotal evidence in learning analytics suggests that there is often (mostly?) no time to validate the findings of a learning analytics project due to the urgent need to intervene. And secondly, if we accept that education is an open and recursive system, the next student cohort will, in all probability, be different, with different needs and characteristics.

So what do we do, in the context of learning analytics, to validate findings and ensure appropriate and ethical interventions?

Under the principle of “Data Sharing, Access, and Use” (9 November, pp. 4-5) the following elements are mentioned:

  • “Certain activities should be expressly prohibited, including the sale or use of data for targeted advertising”
  • There should also be “multiple tiers of data access—from open to controlled”
  • “Unauthorized re-identification and re-contact of participants will be expressly prohibited.” Interestingly, the 8 July version also included “…and consequences should accompany such actions.”
  • The 8 July version contained a problematic statement that was removed from the 9 November version: “PMI cohort should maintain a link to participant identities in order to return appropriate information and to link participant data obtained from different sources” (p. 5; emphasis added).

Excursus: In the current mist surrounding learning analytics, and the lack of and uncertainty regarding ethical oversight, higher education institutions will have to make very clear which activities would be strictly prohibited, e.g. the sale of data or its use for targeted advertising (often by the providing institution). In the context of “surveillance capitalism” (Zuboff, 2015) and the monetary value of data, we need to be clear on the exact boundaries of our governance of student data. There are increasing concerns about the implications of the Trans-Pacific Partnership for the exchange value of data.

There is also the issue of the potential re-identification of students, which is currently not strictly addressed in learning analytics.

Horton raises the issue that “access [to data] should be permitted only where there are assurances that the recipient of the data has adequate protections in place to ensure the privacy of participants and confidentiality of their information” (p. 13). In the context and practice of learning analytics in higher education and the fact that oversight and ethical clearance are mostly left to the integrity of individuals accessing the information, what are the implications?

There is nothing out of the ordinary under the principle of ‘Data Quality and Integrity’ (in both versions). Interesting is the fact that the section dealing with ‘Security’ (in the 8 July version) was scrapped entirely in the 9 November version. What makes this more puzzling is that Horton suggested two pages of aspects to be addressed under ‘Security’, yet the section was nonetheless removed.


Despite concerns about drawing parallels between research and practices in the fields of medicine and education, I read the PMI (both versions) with interest. While I am not sure the PMI addresses the complexities of the asymmetrical power relationship between medicine, medical practices and patients, it does point to some issues that higher education can and should consider in order to move beyond seeing students as data objects and providers of data (often without their knowledge) towards seeing them as collaborators and participants.


Biesta, G. (2007). Why “what works” won’t work: evidence-based practice and the democratic deficit in educational research, Educational Theory, 57(1), 1–22. DOI: 10.1111/j.1741-5446.2006.00241.x.

Biesta, G. (2010). Why ‘what works’ still won’t work: from evidence-based education to value-based education, Studies in Philosophy of Education, 29, 491–503. DOI 10.1007/s11217-010-9191-x.

Birnbaum, R. (2001). Management fads in higher education. Where they come from, what they do, why they fail. San Francisco, CA: Jossey-Bass.

Carr, C. T. (2015). Spotlight on ethics: institutional review boards as systemic bullies. Journal of Higher Education Policy and Management, 37(1), 14-29.

Cooper, D. (2014, December 5). Taking pleasure in small numbers: How intimately are social media stats governing us? [Web log post]. Retrieved from

Hartley, D. (1995). The ‘McDonaldisation’ of higher education: food for thought? Oxford Review of Education, 21(4), 409-423.

Kruse, A. & Pongsajapan, R. (2012). Student-centered learning analytics. CNDLS Thought Paper. 1-12. Retrieved from:

Manning, C. (2012, March 14). Educational triage [Web log post]. Retrieved from

Murphie, A. (2014). Auditland. PORTAL Journal of Multidisciplinary International Studies, 11(2). Retrieved from

Power, M. (1999). The audit society: Rituals of verification (2nd ed.). London, UK: Oxford University Press.

Power, M. (2004). Counting, control and calculation: Reflections on measuring and management. Human Relations, 57(6), 765-783.

Prinsloo, P., & Slade, S. (2014). Educational triage in open distance learning: Walking a moral tightrope. The International Review of Research in Open and Distributed Learning, 15(4), 306-331.

Prinsloo, P., & Slade, S. (2015, March). Student privacy self-management: implications for learning analytics. In Proceedings of the Fifth International Conference on Learning Analytics and Knowledge (pp. 83-92). ACM.

Slade, S. & Prinsloo, P. (2013). Learning analytics ethical issues and dilemmas. American Behavioral Scientist, 57(10), 1510-1529.

Willis, J.E., Slade, S., & Prinsloo, P. (in press). Ethical oversight of student data in learning analytics: a typology derived from a cross-continental, cross-institutional perspective. Submitted to a special issue of ETR&D titled “Exploring the Relationship of Ethics in Design and Learning Analytics: Implications for the Field of Instructional Design and Technology.”

Zuboff, S. (2015). Big other: surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30(1), 75-89.



Of heresies, heretics, and the (im)possibility of hope in higher education


 Detail: Butcher Boys (1985/86) by Jane Alexander

 Abandon all hope, ye who enter here (Inferno, Dante)

 Amidst the absolute horror, fear and nausea triggered by events such as the recent attacks in #Beirut, #Paris and #Mali, and the continued sponsored and condoned violence in #Palestine and #Yemen, there is, I suspect, a deep-seated questioning of “how is all of this still possible in the 21st century?”

What happened to ‘progress’ and the belief that a better world is possible and achievable? Where does the current (and possible permanent?) disillusionment leave the belief that education is the key driver to ‘progress’ and will, per se, result in a more just and equal society? Last week a meme circulated on social media with a picture of Malala Yousafzai with the words “With guns you can kill terrorists. With education you can kill terrorism.”

I wish I could believe. But I cannot. Not that I don’t want to believe, but somehow I suspect that we overestimate the potential of education, on its own, to address generations of injustice, poverty and inequality. Call me a heretic if you want, but allow me to explore the possibility that unbridled economic growth and progress is a heresy – and education this heresy’s servant.

Allow me then, for a brief moment of your time, to reconsider our continued and uncritical belief that humanity progressively gets better… As conversation partners to this blog I take the work of John Gray (2002, 2004) and Zygmunt Bauman (2004, 2011, 2012). Considering the work of Gray, John Banville said that “John Gray has always been the odd-sheep-out” and John Preston called Gray a “prophet of doom.” Bauman’s work has also drawn criticism, characterised as full of “sombre warnings and dark judgments.” Despite these criticisms, I agree with the assessment that “Bauman on a bad day is still far more stimulating than most contemporary social thinkers.”

In contemplating education in this interregnum (Best, 2015), allow me then to reflect on some of the points made by John Gray and Zygmunt Bauman.

Gray (2002) suggests that “The uses of knowledge will always be as shifting and crooked as humans are themselves. Humans use what they know to meet their most urgent needs – even if the result is ruin” (p. 28). Regarding humanity’s belief in progress as inevitable, Gray (2004) suggests that “the core of the belief in progress is that human values and goals converge in parallel with our increasing knowledge. The twentieth century shows the contrary. Human beings use the power of scientific knowledge to assert and defend the values and goals they already have. New technologies can be used to alleviate suffering and enhance freedom. They can, and will, also be used to wage war and strengthen tyranny” (p. 106; emphasis added).

Considering the advances since the Enlightenment against the backdrop of the absolute horrors of the two World Wars and the banality of evil represented by the mushroom clouds over Hiroshima and Nagasaki, the Holocaust and the Vietnam War, one would have expected humanity to have permanently shied away from the abyss. And yet we didn’t, and we still don’t.

Instead of doing everything we possibly can to steer clear of the abyss, we are “messing with forces on a grand scale” (Martin, 2006, p. 15) – on a number of levels. According to Martin (2006), the many challenges facing humanity include environmental collapse, extreme poverty, unstoppable global migrations, non-state actors with extreme weapons, and violent religious extremism, resulting in a new Dark Age.

Depending on your worldview, many suggest that higher education has unreservedly bought into the neoliberal project of globalisation as championed by the World Bank, the International Monetary Fund, the World Trade Organisation and the corporate-industrial-military complex. Economic growth is a leitmotif in curricula and is sold (often literally) as a prerequisite for human progress, despite evidence suggesting that “economic growth does not translate into the growth of equality” (Bauman, 2011, p. 50). Amidst the unbridled consumerism and decadent and rampant (if not rapacious) capitalism, inequalities have increased and the number of displaced people is the biggest in human history. The millions of displaced and permanently unemployed are classified as disposable, as the collateral waste of progress; those who have become permanently redundant suggest a new normal, the new, permanent “Other” (Bauman, 2004).

We live in times where “the incomprehensible has become routine” (Bauman, 2006, p. 14). As we built higher walls around our gated communities, closed our borders, and increased our entry requirements, our fears just got worse.

Fear is at its most fearsome when it is diffuse, scattered, unclear, unattached, unanchored, free floating, with no clear address or cause; when it haunts us with no visible rhyme or reason, when the menace we should be afraid of can be glimpsed everywhere but is nowhere to be seen (Bauman, 2006, p. 2)

Welcome to the 21st century.

As humanity spirals from one genocide to the next, we have increasing reason to question the gospel of Progress. John Gray (2004) states that the “belief in progress is the Prozac of the thinking classes” (p. 3). I would like to add that the unquestioned belief that education, on its own, can make a difference is most probably co-prescribed with that Prozac.

Gray (2004) makes the claim that “History is not an ascending spiral of human advance, or even an inch-by inch crawl to a better world. It is an unending cycle in which changing knowledge interacts with unchanging human needs. Freedom is recurrently won and lost in an alternation that includes long periods of anarchy and tyranny, and there is no reason to suppose that this cycle will ever end” (p. 3). Gray therefore contests the view that the Enlightenment set humanity on an irreversible path of progress where advances in science and technology will, per se, result in a better world. For many Gray’s statements amount to heresies, such as his claim that “The lesson of the century that has just ended is that humans use the power of science not to make a new world but to reproduce the old one – sometimes in newly hideous ways… Knowledge does not make us free” (2004, p. 6).

After the recent events in #Beirut #Paris #Yemen and #Palestine, Gray’s statement that “The most striking development in politics in the past two decades is that this apocalyptic mentality has gone mainstream” (p. 10) rings disturbingly true. In the light of the increasing influence of religious fundamentalism (whether in America or Iraq), terror has become “privatised” – something that cannot be tolerated, but also cannot be eliminated (2004, p. 11).

Gray (2004) furthermore states that no one could have foreseen that “irrationality would continue to flourish alongside rapid advances in science and technology” (p. 18). Even the hope sold by Silicon Valley that technology will solve all of humanity’s problems is without foundation, as “[t]here is no power in the world that can ensure that technology is used only for benign purposes” (2004, p. 20). He continues:

“We are not masters of the tools we have invented. They affect our lives in ways we cannot control – and often cannot understand. The world today is a vast, unsupervised laboratory, in which a multitude of experiments are simultaneously underway” (p. 21).


“We can’t control our new technologies because we don’t really grasp the totality of their effects. And there is a deeper reason why we are not masters of our technologies: they embody dreams of which we are not conscious and hopes that we cannot bear to give up” (p. 22).

Sobering is Gray’s proposal that homo sapiens is actually homo rapiens – with ambitions that are limitless, but living on an earth with resources that are irrevocably finite.

Our present way of life is more prone to disruption than most people think, and its fragility is increasing. We tend to think that as global networks widen and deepen, the world will become a safer place, but in many contexts the opposite is true. As human beings become closely interlinked, breakdowns in one part of the world spread more readily to the rest (p. 61)

In the light of the fact that democracy is seen and sold (literally) as one of the biggest (and deadliest) exports of the United States and its partners and alliances, and the claim that education should help spread the belief in a one-size-fits-all type of democracy (Giroux, 2015), Gray (2004) states that “After all the babble about the irresistible spread of democracy and free markets, the reality is war, protectionism and the shifty politics of secrecy and corruption – in other words, history as usual” (p. 66).

Despite the advances in science improving the lives of many, Gray (2004) states “Science cannot end the conflicts of history. It is an instrument that humans use to achieve their goals, whether winning wars or curing the sick, alleviating poverty or committing genocide” (p. 70).

So where does this leave us? How do we then teach without necessarily believing? How is hope possible in this interregnum?

A good place to start will be to acknowledge that “Knowledge is not an unmixed good; it can be as much a curse as a blessing. If the superseded science in the first half of the twentieth century could be used to wage two hideously destructive world wars, how will the vastly superior science of today be used?” (Gray, 2004, pp. 70-71). I really think that all curricula should have a warning attached to them – advising curriculum developers, instructional designers, students, and quality assurers (to mention but a few) that “knowledge is not an unmixed good”…

Is education willing to acknowledge that “the knowledge maps of the past have, to a large extent, been proven to be fragile and (possibly) the illegitimate offspring of unsavory liaisons between ideology, context and humanity’s gullibility in believing in promises of unconstrained scientific progress” (Prinsloo, 2016, in press)?

Will we teach different curricula if we believed that “history might be cyclical, not progressive, with the struggles of the earlier eras returning and being played out against a background of increased scientific knowledge and technological power” (Gray, 2004, p. 101)?

How do we help students to “read the world” (Freire, 1972, p. 120) – to recognise the metanarratives, the curricula sold-as-truth, engage with claims and counter-claims, realise (in more than one sense) their agency as constrained, entangled, fractured and possible?


Realising, at least for me, that history may be cyclical, and that knowledge and advances in technology may serve evil or justice, gives me a sense of purpose, if not hope. In this permanent interregnum where “the old is dying and the new cannot be born” (Gramsci, 1971, p. 110), a certain amount of morbidity and skepticism may be in order.



Bauman, Z. (2004). Wasted lives. Modernity and its outcasts. Cambridge, UK: Polity Press.

Bauman, Z. (2011). Collateral damage. Social inequalities in a global age. Cambridge, UK: Polity Press.

Bauman, Z. (2012). On education. Conversations with Riccardo Mazzeo. Cambridge, UK: Polity Press.

Best, S. (2015). Education in the interregnum: an evaluation of Zygmunt Bauman’s liquid-turn writing on education. British Journal of Sociology of Education, 1-18.

Freire, P. (1972). Pedagogy of the oppressed. Harmondsworth, UK: Penguin.

Giroux, H. A. (2015). Democracy in Crisis, the Specter of Authoritarianism, and the Future of Higher Education. Journal of Critical Scholarship on Higher Education and Student Affairs, 1(1), 7.

Gramsci, A. (1971). Selections from the Prison Notebooks of Antonio Gramsci. Edited by Q. Hoare and G. N. Smith. New York, NY: International Publishers.

Gray, J. (2002). Straw dogs. Thoughts on humans and other animals. London, UK: Granta Books.

Gray, J. (2004). Heresies. Against progress and other illusions. London, UK: Granta Books.

Prinsloo, P. (2016, in press). Metaliteracy, networks, agency and praxis: an exploration. Chapter accepted in T. Mackey & T. Jacobson (Eds.), Metaliteracy in practice.





And then everything turned to beige… The quantified academic in an age of academic precarity

[It almost feels obscene not to reflect on the events in #Beirut #Paris #Yemen (the list is endless). I am, however, permanently nauseous, speechless and saturated with claims and counter-claims and the increasing evidence that the events of the last few days, weeks, months and years are becoming the new normal. So forgive me if I don’t share my reflections at this stage. I.Just.Can’t.]

It is that time of the year again where I must report back on not necessarily what I have done or the quality of what I have done, but how much I have done… How many articles? How many chapters? How many single-authored or co-authored articles? How much money did I earn in the form of external research grants? How much am I worth? How many citations? How much did my h-index increase since I last looked and reported on? How many? How much?

My value contribution as a scholar and a researcher is being diluted to a single score on a template.

I have become a score, a number, and a single digit. Nothing more. But so much less.

And then everything turns into beige. I become a zombie. A member of the living dead.

Please understand that I don’t yearn for a romanticized past of academic freedom that (most probably) never was. As I meandered from administrator to professional to academic to research professor (a journey of 20 years), I heard stories of ‘how good things were’ and ‘how things changed.’ As a fairly recent addition to the ever-smaller number of faculty carrying ever-bigger administrative tasks and workloads while participating in the dance of life and death as researcher, I can only reflect on the ‘now.’

Let me bore you with some detail.

At the beginning of the year I contract with my supervisor on a number of deliverables. As a research professor there is not much to negotiate. For example, I have four key performance areas, namely academic leadership (10%), research (70%), community engagement (15%) and academic citizenship (5%). The four performance areas are fixed, and though the percentages are negotiable (within a certain range, depending on your job title), they are relatively bizarre and of very little consequence – except for their role in the weighting of your single-digit percentage in your final rating.

Let me illustrate the point: The key performance area of ‘academic citizenship’ includes my participation in academic and institutional committees, task teams, etc. This year I was the Scientific Chair for a major international conference, and the amount of time I spent in meetings, reviews and planning was much, much more than 5%. I could have increased it to 10% (the maximum), but then I would have had to steal 5% from another key performance area. Which one? And does it really matter? You only have so many hours (a point to which I will return)…

Apart from the percentages allocated to each key performance area, the ‘content’ of each of these key areas is also increasingly hard-coded – meaning that the definitions and criteria are predetermined and fixed, and scores are automatically calculated. Of my four key performance areas, two are hard-coded: academic leadership and research.

Academic leadership has the following criteria:

  • Contributions to innovative and cutting-edge practices in research (carrying 15% of the allocated weight of 10%). Leaving aside the highly problematic issue of defining ‘innovation’ and ‘cutting-edge practices’, there is the fact that what may be innovative or cutting-edge in one disciplinary field may not be appropriate in another field. How does one allocate a score to ‘innovative’ and ‘cutting-edge’? And how innovative and cutting-edge can you really be once your research application has survived the horrendous ethical review process, journal editorial policies and reviewers who may have very different ideas regarding innovation and cutting-edge…
  • Successful submission of mentees’ research plans to the Chair of Department (15% weighting). Score. No indication of how detailed these plans should be. No acknowledgement that a mentorship relationship is complex, layered and embedded in power.
  • Mentorship (70% weighting). Very interesting is the fact that your score is determined not only by the number of mentees, but specifically whether you can provide proof that you assisted them in applications for external funding or rating by the National Research Foundation (NRF). The quality of the mentorship is impoverished to assistance for external funding or ratings.

That’s it. That is ‘academic leadership.’ Hard-coded. Scored. Tick. Transfer score to template. Done.

Research as key performance area consists of three criteria:

  • Research outputs and successful completion of postgraduate students (80% weighting). Outputs are clearly defined; there is no doubt about what counts as an output – if it is on an approved list, tick. If you co-authored the article, half a tick. If your postgraduate student has not completed his or her qualification in the reporting period, no tick… despite the immense amount of time, energy, blood, sweat and tears the supervisory process demanded of both supervisor and student.

It helps that you are required to report on the last 3 or 5 years, as this allows for the time and the different iterations involved in the publication process. Not considered at all are your scholarly contributions in other formats, many of them increasingly peer-reviewed and public.

  • Grant applications for external funding (10% weighting). If you have evidence that you applied for external funding, you get a score of 2. If your grant was successful, you are average, a 3. If you have been successful with more than one external grant application, you get a 4 and you attain a full score if the total amount of grant money allocated to your research is in excess of 2 million ZAR.

Money talks. Money makes the world (of research) go round.

  • The third criterion is being rated by the National Research Foundation (NRF) (10% weighting). If the NRF rates the gravitas of your research as being acknowledged at a national level (a rating of C3 on a scale of C1–C3), you are allocated a score of 3.

And at the end, the Excel spreadsheet tallies the scores, and who I am, and the quality and gravitas of my scholarly contribution, becomes a number. Nothing more, so much less.
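To make the mechanics concrete: the tally described above amounts to nothing more than a weighted sum. What follows is a hedged sketch, not the institution’s actual template – the four areas and their weightings are taken from this post, while the function name, the per-area scores and the 1–5 scale are my own illustrative assumptions.

```python
# A minimal sketch of the kind of hard-coded tally the template performs.
# The four key performance areas and weightings come from this post; the
# per-area scores and the 1-5 scale are hypothetical illustrations.

def overall_rating(scores, weights):
    """Collapse per-area scores into a single weighted number."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weightings must total 100%"
    return sum(scores[area] * weights[area] for area in weights)

weights = {
    "academic leadership": 0.10,
    "research": 0.70,
    "community engagement": 0.15,
    "academic citizenship": 0.05,
}

scores = {  # hypothetical scores on a 1-5 scale (3 = 'average')
    "academic leadership": 3,
    "research": 4,
    "community engagement": 3,
    "academic citizenship": 5,
}

print(round(overall_rating(scores, weights), 2))  # prints 3.8
```

However hard you work in a 5%-weighted area, the arithmetic guarantees it can barely move the final number – which is precisely the point the post makes about ‘academic citizenship.’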

Don’t get me wrong. I don’t mind being evaluated. I don’t mind presenting evidence of what I think I am ‘worth’ as a researcher. A lot of the evidence of my standing in the field is in any case increasingly public and out there already: comments on my blogs, remarks on Twitter, references by other scholars. Performing my scholarship in public is an immensely risky, but also very rewarding, exercise.

So how do I make sense of this? How do I manage to dance to the beat that is not of my making?

I do understand that in the context of increased internationalisation and competition, higher education increasingly sells education as a privatised and mostly costly commodity, with an emphasis on a return on investments, just-in-time products delivered by just-in-time labor aiming to get the products (aka students) off the shelves in the shortest possible time.

I do understand that the efficiency of higher education is increasingly monitored and evaluated by auditing and control processes harvesting and analysing data and evidence as/in ‘rituals of verification’ (Power, 1999, 2004). Higher education increasingly resembles “Auditland” (Murphie, 2014:10), where these rituals of verification and auditing processes beget more auditing processes in never-ending cycles that affect all learning, teaching and research (Murphie, 2014). Higher education as “Auditland”, where we all spy on one another, compete for scarce resources, and try to outdo each other by providing more evidence, getting those grants, getting the keynote invitations, getting ahead.

I do understand that since the 1990s higher education has become increasingly a fast food factory or outlet characterised by the mantra of efficiency, quantification, calculability, predictability and control (Hartley, 1995). I do understand that changes in funding regimes resulted in the directive that “funding … follows performance rather than precedes it” (Hartley, 1995: 418). I do understand that the dominant narrative in higher education is that of a positivist, quantification fetish (Prinsloo, 2014), informed by a “neoliberal lexicon of numbers” (Cooper, 2014: par. 5), the “tyranny of numbers” and “measurement mania” (Birnbaum, 2001:197).

Despite ‘understanding’, I also see these pervasive auditing and verification rituals as mediated and mediating tools in the service of evidence-based decision making, creating technical systems that serve simultaneously as prosthetics and as parasites, supplementing and replacing authentic learning and frantically monitoring “little fragments of time and nervous energy” (Murphie, 2014:19).

And amidst all of this I have become colonised as a single digit score and I become a spectator to my own drama of losing myself. And then life turns a lighter shade of beige (see the wonderful post by Kate Bowles, and Frank, 1995).

What are my options? I wish I could shout with Kate Bowles that “you don’t have my consent to use my remaining time in this way.” I know that time is irreplaceable. I know that my luck will run out, possibly sooner than later. (See the thought-provoking post by Adele Horin).

I know I cannot sustain the frantic activity, the restlessness, the panic, the dread of the next performance appraisal. I will have to make a plan.

And of course I do. I work longer. I work harder. Weekends are a non-event. The only difference between office hours on a weekend and during the week is the fact that I am (most probably) the only one in the building.

But why, you would ask? Why? Don’t I have a life?

I am a 56-year-old white academic in post-apartheid South Africa, where my options for finding employment outside of academia, or in international higher education, are zero. Don’t get me wrong. This is not about ignoring the many privileges I had and still have as a white male. I don’t subscribe to the notion of victimhood and suffering that is prevalent in much of the current white Afrikaner discourse. There is a vast difference between recognising the “historic burden of whiteness” and self-abasement or lame apologies (O’Hehir, 2014). My race and gender, and the socioeconomic circumstances of my family, allowed me to play on a field from which many others were excluded. (Also see Bowler, 2014; Crosley-Corcoran, 2014; Gedye, 2014.)

So I cancel a doctor’s appointment. I fit a physiotherapist appointment into my lunch hour (lunch?) for the unbearable pain in my neck.

Let me make it very clear that I love writing. I absolutely love doing research. I love the excitement of living on the edge of publishing, of awaiting feedback from editors on your last submitted article. I am an adrenaline junkie. Forgive me, mother, for I have sinned. I say my three Hail Josephs and accept the invitation to write a chapter for a book. Imagine. They identified me as a worthy scholar and would be honored if I accepted their invitation to contribute a chapter. Of course I would. The honor is mine. As a white African on the outside of the hallowed spaces of North Atlantic knowledge production, I am just so honored. How can I refuse? As a white male I am a neutered stray dog with no teeth in my home institution. So when I get invited to participate in an international publication, how can I refuse? Anyway, it is a sole-authored chapter and, as such, worth so many points in the template during the end-of-year assessment.

So I graciously accept. “I would be honored.”

So I cancel breakfast on Saturday morning to be earlier in the office. The color beige is not bad at all.


Birnbaum, R. (2001). Management fads in higher education. Where they come from, what they do, why they fail. San Francisco, CA: Jossey-Bass.

Bowler, D. (2014, August 27). Defined by your ‘blackness.’ [Web log post]. Retrieved from

Bowles, K. (2013, November 24). Irreplaceable time. [Web log post]. Retrieved from

Bowles, K. (2014, March 5). Walking and learning. [Web log post]. Retrieved from

Cooper, D. (2014, December 5). Taking pleasure in small numbers: How intimately are social media stats governing us? [Web log post]. Retrieved from

Crosley-Corcoran, G. (2014, August 5). Explaining white privilege to a broke white person. [Web log post]. Retrieved from

Frank, A.W. (1995). The wounded storyteller. Body, illness, and ethics. London, UK: The University of Chicago Press, Ltd.

Gedye, L. (2014, October 13). Jou past se poes. The Con. [Web log post]. Retrieved from

Hartley, D. (1995). The ‘McDonaldisation’ of higher education: food for thought? Oxford Review of Education, 21(4), 409-423.

Horin, A. (2015, November 16). Dear reader, my luck has run out. The Age. Retrieved from

Murphie, A. (2014). Auditland. PORTAL Journal of Multidisciplinary International Studies, 11(2). Retrieved from

O’Hehir, A. (2014, August 30). Why acknowledging white privilege is not surrendering to ‘white guilt.’ [Web log post]. Retrieved from

Power, M. (1999). The audit society: Rituals of verification (2nd ed.). Oxford, UK: Oxford University Press.

Power, M. (2004). Counting, control and calculation: Reflections on measuring and management. Human Relations, 57(6), 765-783.

Prinsloo, P. (2014, October 22). Mene, mene, tekel, upharsin: researcher identity and performance. Inaugural lecture at the University of South Africa. Retrieved from


Troubling open education: from ‘Fauxpen’ to open

The third keynote at the recently held ICDE2015 conference was by Laura Czerniewicz, one of the most critical and informed scholars of higher education on the African continent, and a critical voice in international higher education, knowledge production and dissemination. The title of her presentation was “Troubling open education” (see this write-up by University World News). Against the backdrop of the hype, the claims and counter-claims regarding ‘open education’, there is an urgent need to engage critically with the different nuances between faux or false openness and openness as an emerging (and defiant) reclamation of the commons…

[In this reflection I tried to make sense of some of the central issues and arguments raised by Laura in her keynote. This blog is neither an attempt to provide a transcript of her keynote, nor a detailed summary. This is my attempt to make sense, to come to terms with and get entangled in the messiness of the notion of ‘open education.’]

In her keynote Laura referred to some of the general “troubles” in the broader landscape of higher education, and specifically to the role of open education in solving (or exacerbating) these troubles. While many of the discourses in current higher and open education use “open” as if we all agree on its scope and meanings, Laura proposed that the notion of openness in education is confused and (should be) contested. In general parlance ‘open’ often means ‘free’, ‘open licensing’, ‘legal openness’ and/or ‘digital’. Somewhere between these meanings, openness as a ‘celebration of the common’ became and continues to be lost. Laura therefore mooted the need to trouble (as a verb) the notion of open education and “reclaim the core of open education.”

The current debates, claims and counter-claims regarding open education can be understood in the context of references to the different ‘ages’ found in general higher education discourse, such as the ages of austerity, unbundling, inequality and abundance. Though these ‘ages’ or discourses are often mutually constitutive or exclusionary, they exist simultaneously (often in one institution…) and shape the meanings and understandings of ‘open.’ It is therefore not strange to find the ‘age of austerity’ functioning, in some weird way, as support for the ‘age of abundance’ and ‘sharing’ – a point I will return to later.

Laura referred to the fact that “two thirds of OECD countries decreased the proportion of expenditure devoted to education between 2005 and 2011” and that “more than half of developing countries reduced spending on education between 2008 and 2012”: it is clear that things have changed. In response to the new funding regimes and cost savings, we encounter the “wonderful euphemism called ‘cost sharing’ which means we share the costs with the students.” As government subsidies and funding regimes changed, the cost of education and tuition increased exponentially, the cost of books increased, the cost of houses decreased, and the consumer price index plateaued.

Laura shared two examples (from research done in 2009) comparing the cost of books between South Africa and the USA: “A long walk to freedom”, for example, would cost $12.10 in the USA but $24.30 in South Africa. The Oxford English Dictionary costs $21.50 in the USA and $47.00 in South Africa. The projected costs in the USA, at South African proportions of income, for the two books are $259.77 and $504.50 respectively!

Non-traditional students now comprise the majority of students in higher education, questioning many of our assumptions about the ‘traditional’ or “average” student. Inequality is pervasive, and the World Economic Forum describes deepening inequality as one of the top trends of 2014. In this somewhat notorious league table, South Africa is at the top, where the richest people in South Africa hold wealth equivalent to that of 50% of the population.

But surely technology can make a difference? Referring to the promise of technology, Laura stated that although mobile phones are ubiquitous, the immense cost of data in the South African context seriously hampers the optimal use of mobile technologies in education. In developed countries, 96% of the population can afford data, while in developing countries only 21% of the population can afford to buy data. We should therefore understand technology as “a cause, a consequence and a mediator of higher education change.”

We also need to understand the conversation about the meaning of ‘open’ in the broader context of knowledge production and dissemination that is increasingly privatised. We cannot ignore the commodification, the corporatisation or McDonaldisation, and the franchising of knowledge production and dissemination, where North Atlantic knowledge is “Coca-colonising” the rest of the globe. The flattening or universalisation of knowledge functions as code for the naturalisation of American knowledge as the only ‘real’ knowledge or, for that matter, ‘truth.’

Against this backdrop, ‘open education’ seems like a great solution, given the assumed/presumed values of open education and assumptions about sharable knowledge and enabling technology. Education, and specifically open education, is presented as “an unprecedented public good” (Budapest Open Access Initiative), and the Cape Town Open Education Declaration, focusing on OER, celebrates the open, collaborative and free possibilities of technology. Open education therefore seems a coherent response to the troubles facing higher education.

So, can open education actually deliver on the hype and the promise? The answer is not as straightforward as we would like it to be. Laura referred to the South African saying “ja-nee”, meaning a concurrent yes/no… Open education is a site of confusion, conflation and serious contestation, not helped by the flood of ‘opens’.

Our uses of ‘open’ are context-, discipline- and most probably rhetoric-specific… The fact that we use the same word is not necessarily advantageous. Even in the educational arena the notion of ‘open’ means different things to different people – open educational resources, open educational practices, open courses, open content. We therefore need to unpick the different nuances of ‘openness’ and think in terms of degrees of openness and continuums of social openness. And we should never forget that openness always happens in particular historical and local contexts.

Defining and understanding ‘open’ is not simple at all. Laura referred to Chris Jones who said “openness is not reducible to a simple definition because it is a complex assemblage of social, political, and technological elements developed over time. It is variable and nuanced.” Openness in education is furthermore always relational and always exists in relation to closure. It is always relative and on a continuum. It is permeable. And, wait for it, openness is not necessarily positive. Openness has a shadow side and it is not necessarily intrinsically afforded by technology.

In the context of claims that ‘open’ means ‘free’, Laura pointed to the claims that ‘open education’ would mean “the end of content scarcity.” ‘Open’ also means, for many, digital and low cost. It is true that the costs of reproducing digital content are low and that digital goods do not degrade. There is also the reality of the explosion of user-generated content. Referring to the assumption that “digital = open = free”, Laura suggested that going “digital affords ‘open’ but it also affords closed” and that “analogue has affordances that can be more open than digital.”

We need to remember that when we move to digital we move from the tangible to the intangible, we move from ownership to licence, we move into an arena of digital rights management… We need to remember that it is not the technology that decides: it is people who make the rules, write the algorithms, and establish and enforce the licensing agreements. When Amazon remotely removed copies of “1984” from Kindle devices, it reminded its customers that they do not own the copies of the books they bought…

We need to look at new ecologies of access, where there are continuums between legal and illegal and between analogue and digital. Open content is only one option in this new ecology. Examples of other options are “piracy cultures” (building media relationships outside institutionalised sets of rules) and the “affective economy” (referring to the feelings of goodwill when files are shared, embellished, remixed and re-shared). Referring to the work of Castells and Cardoso (2012), Laura pointed to the fact that significant parts of the world’s population “is building mediation through alternative channels of obtaining content” (emphasis added). Piracy cultures have become an essential characteristic of the networked society. Piracy has become the norm. This is linked to the central role of the “informal” as the “quiet encroachment” that sees the blurring of boundaries and the breaking of rules “as the norm.” Most digital books are pirated, more than 75% of prescribed textbooks are available as pirated e-books, and a huge percentage of e-book readers download their books illegally.

This is not a developing-country issue; it happens across contexts, all over the world.

In research done at the University of Cape Town (the ROER4D project), researchers found that many teachers assume that because a resource is digital it is also an OER. Students often do not know the difference between legal and illegal resources, and feel that they have a moral right to educational and scholarly resources: withholding access through licensing or copyright arrangements is, to them, unethical. For many students, sharing and pirating resources “is the right thing to do.”

Laura therefore referred to the notion of “fauxpen” – a blend of faux (false) and open – as distinct from ‘real’ openness. Central to understanding the dilemma is the elephant in the room, namely ‘copyright.’

Most discourses about copyright are framed in terms of serious criminality, but Laura proposed that copyright itself needs rethinking. In a digital and digitised world, copying and sharing are integral to how computers work, and therefore sit beyond our current copyright frameworks. The history of copyright shows that it was always regarded as a necessary evil: the rich did not need to earn money from their works, but authors and artists did. The public domain was always the goal of copyright, and works were protected for the shortest possible period.

In stark contrast to these early beginnings, knowledge is currently in the vice grip of commerce and is being increasingly commoditised, enclosed and commercialised. We cannot and should not underestimate the implications. The foundations of an informed and democratic society may be at risk. Laura therefore claimed that “The intellectual property frameworks which shape higher education engagement with knowledge are anachronistic and outdated, out of sync with the urgent needs of a digitally-mediated and extremely unequal world.”

In this context Laura referred to the Trans-Pacific Partnership, a multinational trade agreement currently under negotiation that would see the current 50-year copyright term increased to 70 years. The agreement would also target whistleblowers and journalists, increase the liability of Internet intermediaries, and adopt heavier criminal sanctions. A counter-narrative is “Bound by Law”, which asks how creativity can flourish in an increasingly controlled and policed environment, how we can balance private and public spaces, and how we can achieve sustainable development of creativity and public access for everyone in the intellectual property space.

Laura made an impassioned plea that we should realise the profound importance of these changes for us as educators, and that we need to build alliances with public interest lawyers. We need to find them and work with them. We need to (re)look at the intellectual property contracts for academics at universities. The notion of ‘sharing’, in stark contrast to many of the current regimes and practices, is such a positive word. Laura referred to David Wiley, who said that “without sharing there is no education.” Notions such as ‘sharing’ and ‘generosity’ are currently being appropriated by commercial entities and celebrated in the “Überisation” of education.

‘Open’ cannot ignore those who cannot afford to participate in these ‘open’ spaces. We cannot ignore those who are currently excluded and who will, in the context of the commercialisation of “open”, continue to be excluded. Laura warned that “sharewashing” has become the new greenwashing, perpetuating a world of make-believe and pretence covered by a veneer of openness.

What does “reclaiming the commons” mean for educators? What can ‘openness’ mean for educators and higher education management? Laura suggested, in closing, that the role of the university is to –

  • Assert academics and authors as the agents and owners of knowledge
  • Protect the autonomous university at the heart of commons-based structures for knowledge
  • Develop and support collaborative initiatives in knowledge dissemination

We should never forget that open education is a means to an end. As such, open education can and should be an important strategy towards an equitable, democratic and peaceful world.


Resisting techno-solutionism, resisting the Pied Piper, reclaiming the story: A personal reflection on Audrey Watters’ keynote at ICDE2015

Listening to Audrey Watters at the recently held ICDE2015, where she provided a radical and critical interrogation of the Silicon Valley narrative, I could not help but think of the legend/myth of the Pied Piper. It is said that in 1284 the town of Hamelin was suffering from a rat infestation. As if called, a piper appeared, claiming to be a rat-catcher. With the mayor at his wits’ end with the plague, the piper promised to rid the town of the rats for a sum of 1,000 guilders. After the mayor agreed, the piper fulfilled his side of the contract by luring the rats into the river with his pipe. But when the piper returned to claim payment for services rendered, the mayor reneged on his promise and paid only half the promised sum. The piper swore revenge and left the town in anger, only to return later. While the villagers were in church, the piper lured the town’s children into a cave by playing on his pipe. The children were never seen again. According to the legend, three children survived the ordeal – a lame child who could not follow quickly enough, a deaf child who did not hear the music, and a blind child who could not see where the rest of the children were going.

In my own version of the story I would imagine there was a fourth child that survived the ordeal – a girl with the name of Audrey – who recognized the piper for who he was and like a modern-day Cassandra warned the children not to follow the Pied Piper but the children did not believe her…

It is difficult to overestimate the impact the work of Audrey Watters has had on my personal understanding of #edtech in the context of higher education. Audrey’s passion, incisive analysis and critical scholarship are a breath of fresh air in the often incestuous discourses on technology, the latest disruption, the latest claims and products of venture capitalists, and the captive gaze of technology as Medusa on higher education management and governments.

It was therefore an immense privilege to have Audrey as the second keynote at ICDE2015. The title of her keynote was “Technology imperialism, the Californian ideology, and the future of higher education”, focusing on “the future of edtech as imagined and narrated by Silicon Valley.” Her keynote was especially important in the context of education on the African continent, where “Africa” is often code for ‘new markets’ in the narratives of Silicon Valley. Examples include “Facebook satellite to beam internet to remote regions in Africa”, “Facebook’s Mark Zuckerberg Invests $10M In E. Africa’s Low Cost Private Schools Firm”, Mark Zuckerberg launching Internet.org in Ghana, and “Zuckerberg: Facebook’s mission is to ‘connect the world.’”

Despite being very critical (if not skeptical) about the various claims Silicon Valley (and specifically Mark Zuckerberg) makes with regard to education, Audrey agrees with the sentiments expressed by Neil Selwyn (2014): “While undoubtedly of great potential benefit, it is clear that educational technology is a value-laden site of profound struggle that some people benefit more from than others – most probably in terms of power and profit” (p. 2). What makes Audrey’s and Neil’s voices so important is the fact that many (most?) academics and managers in higher education have an apparent “blind spot” for the ideological, political and economic nature, and possibly even the basis, of educational technology. We are sleepwalking through our mediations with technology – a phenomenon called “technological somnambulism” (Winner, 2004, in Selwyn, 2014, p. 3). [Or blindly following the Silicon Valley Pied Piper.]

It is easy to think of educational technology just in terms of platforms and tools, but we need to understand educational technology “as a knot of social, political, economic and cultural agendas that are riddled with complications, contradictions and conflicts” (Selwyn, 2014, p. 6).

Scholars like Watters and Selwyn may be accused of being over-critical or even pessimistic. In their defense it can be stated that “there is little to be gained from maintaining a Pollyannaish stance towards technology use in education”, especially in light of the fact that educational technology is not bringing about the changes and transformations that many people would like to believe (Selwyn, 2014, p. 15). Selwyn (2014) actually suggests a “purposeful pursuit of pessimism” (p. 15) – not as resignation to a fate determined by others, but “as an active engagement with continuous alternatives” (p. 16). The intention is to “slow down the pace of our discussions in the face of the fast-moving, rapidly changing and often ephemeral nature” of educational technology (p. 17).

In her keynote, Audrey, so to speak, unmasked the Pied Piper as the captains of technology and more specifically educational technologies in Silicon Valley. So why Silicon Valley?

What became the Internet originated in California in 1969 and since then the discourses about the Internet were imagined, shaped and told by Silicon Valley. The main tenets of this narrative are how the Internet and access to the Internet will benefit all, how access to the Internet is a human right – but not in the sense of having freedom of expression, or freedom of association, or a passion for social justice, or resulting in a more equitable and just society. Connectivity, and access to the Internet is, according to Audrey, Silicon Valley’s shorthand for new markets as well as the extraction and monetization of personal data…

As such, Silicon Valley presents the Internet as a master narrative in which the Internet will change and fix everything. Audrey pointed out how Mark Zuckerberg’s Facebook is, for many, equivalent to the Internet (not that Mark minds). In fact, Mark founded Internet.org – a technological octopus comprising a partnership between Facebook and six telecommunications and technology role-players. Facebook is the Internet. Facebook wants to be the Internet for everyone. At least, for the poor. And not the whole Internet – only selected services…

Why does this matter? Why should we be worried? Why should we pause for a moment and exit the crowd of excited governments and education institutions dancing to the tune of the Pied Piper?

We need to ask a number of critical questions, such as: Who controls the network? Who controls [and has access to] the data? Who controls the servers? Who controls the software? We need to expose that, behind the philanthropy and hype, there are new markets, captive audiences, consumers, customers and data. Big Data. Enter surveillance capitalism. Big time. (See Danaher, 2015; Fuchs, 2010.) Enter “algorithmic personalization” (Koponen, 2015) and a “scored society” (Citron and Pasquale, 2014; Pasquale, 2015).

We presume that the Internet is neutral, and we forget the inherently political, economic and cultural infrastructure of the Internet. In selling the Internet to African governments and educational institutions, we forget (at our own peril) that the Internet (aka Facebook) does not share or serve our commitment to social justice and reduced inequality…

Silicon Valley is a mindset, not a location. This mindset celebrates heroes, inventors, smart risk-taking and, of course, being white and male. Silicon-speak includes words such as innovation, disruption and destruction. It privileges the “new”, and everything it deems old is doomed to be classified as obsolete. Everything is perpetually in need of an upgrade. The Silicon Valley narrative has no memory, no history, unless it invents one to suit its own purposes. It celebrates the individual at all costs and calls this ‘personalization.’

The Silicon Valley narrative does not neatly co-exist with public education – and we forget this at our own peril.

Audrey asked: “What if Edtech was supportive and not exploitative? Open instead of foreclosed? About rethinking teaching and learning and not simply about expanding markets? What if Edtech was about meeting individual, institutional and community goals?” But educational technology as designed, narrated and sold by Silicon Valley believes that education is broken and that technology (aka Mark Zuckerberg et al.) can and will fix it. Enter the Pied Piper to rid the city of the rats.

Audrey also referred to the work of Neil Selwyn, who describes technology as “a site of social struggle through which hegemonic positions are developed, legitimated, replicated and challenged.” We forget technology’s roles in and connections to neoliberalism and neo-imperialism, global capitalism and the Empire. The Internet and ‘being connected’ equate to being “hip and rich”, ignoring race, labor, gender, structural inequalities and white supremacy. The Internet as designed and narrated by Silicon Valley does not serve justice and equity, but serves venture capital, and is imperialism’s latest form…

It is our responsibility to push back and to resist. It is not inevitable that we follow the Pied Piper. We can resist – not because we do not want things to change, but because we must resist in the name of freedom and justice, and not in the name of wealthy white men looking for new markets. “Silicon Valley does not have to be our dream machine. We can do better. And we must.”

Scholars like Audrey Watters and Neil Selwyn are, in the words of Popkewitz (1987, p. 350) doing “critical intellectual work.” “Being ‘critical’ … implies taking a skeptical view of the claims surrounding educational technology in terms of fairness and efficiency, and rejecting the notion that this is an inevitable process that is beyond challenge or change” (Selwyn, 2014, p. 12).

Audrey Watters, in delivering her keynote at ICDE2015, was not a young girl running alongside the throngs of kids following the Pied Piper. She is a formidable scholar. We ignore her at our own peril.

Image credit

“Study for the Pied Piper of Hamelin – The children” (c. 1871) by George John Pinwell


Citron, D. K., & Pasquale, F. A. (2014). The scored society: Due process for automated predictions. Washington Law Review, 89, 1-33.

Fuchs, C. (2010). Web 2.0, prosumption, and surveillance. Surveillance & Society, 8(3), 288-309.

Koponen, J. M. (2015, June 25). The future of algorithmic personalization. [Web log post]. Retrieved from

Pasquale, F. A. (2015, October 14). Scores of scores: How companies are reducing consumers to single numbers. The Atlantic. Retrieved from

Popkewitz, T. (1987). Critical studies in teacher education: Its folklore, theory and practice. Brighton, UK: Falmer Press.

Selwyn, N. (2014). Distrusting educational technology: Critical questions for changing times. London, UK: Routledge.
