The (not so) secret life of a networked and networking scholar


[Image credit: https://pixabay.com/static/uploads/photo/2016/05/01/20/12/swing-1365713_960_720.jpg]

Not a day passes without another blog post or article about the creeping commercialisation of and surveillance on Twitter and Facebook. No matter how often I check my privacy settings on both of these social networking platforms, it seems there is no way to stay ahead of the changes (often made without notification), the scams, the surveillance, or the latest alert shared by another user. In the light of increasing concerns and discomfort among many academic users of these platforms, I continuously re-assess my own use and online practices, and increasingly have to defend my (continued) use…

So why am I still (for now) using these platforms despite many others opting out?

Allow me to share my current sense-making of what these two social networking platforms mean for me as an individual, an activist, a scholar and a researcher…

Let me start with Twitter…

I discovered Twitter when I attended the ALT-C conference in Manchester in 2009. I remember sitting in the audience listening to debates on questions such as "Is the LMS dead?" … Twitter was all the rage at the conference, with many sharing stories and anecdotal evidence of their own practices and how Twitter enriched their teaching. I created a Twitter profile and tried to develop a sense or vision for my own practice, but I really found it hard going. It just did not make sense, at first. I struggled to find my own voice, my own practice. I remember stressing about not having something 'original' to tweet, and my early attempts at originality disappeared in the forest where no one hears when a leaf falls. But I kept going, slowly but surely building up a network of scholars in the field whom I followed, with a smaller number of scholars following me back. I mean, what can someone from a relatively obscure university in darkest Africa really contribute to the network of knowledge production and dissemination? My insecurities about being accepted in the Twitter network as having something to say or contribute bore an eerie resemblance to my insecurities and inability to play the field in a transforming higher education sector.

Then, early in 2012, my Twitter account was hacked. I clicked on a link in a direct message with the tempting line "Look what video of you I found on the Internet" – or something as obscure and possibly embarrassing as that. Almost immediately my Twitter feed was full of angry followers asking me to stop sending them direct messages. No matter what I did, there was no way out of this Kafkaesque nightmare. Changing passwords did not help, so I committed hara-kiri – took one for the team. I started over. New profile name. New passwords. No followers.

Twitter provided and still provides me with access to a network of thinking and exposure to ideas that I did not have access to in my geopolitical location and institutional networks. It was and is my oxygen. My daily Twitter practices slowly evolved to become a central and most important part of my daily research activities. My network slowly grew and keeps growing. I worked and work hard at proving my value to the network – by curating content, by sharing, by caring.

One evening in 2015 when I logged on to Twitter, I saw that due to a glitch my Twitter profile indicated that I had zero (yes, zilch) followers. I know it sounds terribly immature, but the fact that all my hard work had suddenly disappeared left me panicking. I responded to the crisis and tweeted: "@Support No followers? No one following me? Twitter Zen – with no followers, & not following anyone, does anyone still (hear) see this tweet?" (Prinsloo, P. [14prinsp], 2015). I know it sounds frivolous, but my Twitter profile was so much more than just a profile or data-proxy. My Twitter profile was me. And due to a glitch on the platform, something of me was taken away from me. I was erased from the network.

Though the glitch was fixed and I could breathe again, it left a permanent mark on my digital psyche regarding how vulnerable we actually are on these networks. It is as if you play in someone else's garden, knowing that s/he can, at any time and for no reason at all, chase you out and lock the gate. This experience brought back painful memories of playing by myself in a park or playground as my awkward attempts to make friends never seemed to pay off. This incident, however, also illustrated the precarity and even frivolousness of our networked identities and beings (Watters, 2016).

Having survived this ordeal made me realise how precious my Twitter profile and daily praxis have become, and how integral they are to my research. So when Twitter hearts started to explode all over the place, and the number of advertisements and promoted tweets grew, I just kept and keep running – "Run, Forrest, run!"

I start my day in the office at 5:30 am. For the next two hours I scan my Twitter feed as far back as I can – often working through 6-7 hours of tweets. This time of the morning allows me, located as I am in South Africa, to see and participate in the discourses and networks to the west of South Africa (the US and Canada) and to the east (e.g. Australia). I retweet and amplify what I find profound. I follow links. When I find something awesome, I also share it on my Facebook page, my LinkedIn page and my Minds.com page, and send it via email to colleagues who are not part of my networks on these platforms.

I cannot (yet) imagine my scholarly life without Twitter.

The history of my use of Facebook also provides evidence of how I struggled to find my voice – my digital Facebook persona – in deciding what I wanted to share and make public. I remember realising that I could not and did not want to share my most intimate feelings of desperation and depression (whether on a personal or professional level) with my 'friends'… I somehow felt that they would not be interested in my scholarly discoveries… So I deleted my account. Facebook was not for me.

Grainne Conole (bless her soul) and her team from Leicester visited my institution, and she encouraged me to revisit my decision not to use Facebook. I started afresh. I decided that I would use Facebook only for professional and scholarly reasons. I don't share to different groups. I just don't have the time for that. I share what I want, and if you don't like it or find it boring, goodbye. Facebook allowed me to discover the love many of my scholarly friends have for cooking, for cats, for becoming a grandfather or grandmother, or pictures of their latest meal (…) or conference attendance in some or other exotic (or not…) location. As I found my feet using Facebook as a way to share scholarly articles, as well as my interests in gender and identity issues, my Facebook became an intimate space where I selectively share and witness some of the more personal details of scholars I respect.

Yes, I know Facebook uses my clicks and 'likes' to profile me. Yes, I know the space is becoming increasingly creepy. I am increasingly guarded about what I share. I continuously look over my shoulder to see who is watching. I installed ad-blocking software, use Ghostery, and my search engine is DuckDuckGo. I check my privacy settings almost daily. And yes, I know it will not undo the surveillance and the collection of my data.

But for now, I am playing with friends in the park, discovering, sharing, growing and learning. Yes, I am increasingly aware of those watching. But for now, Twitter and Facebook are the oxygen that allows me to breathe. For now…?

Book review: The Internet is not the answer (Andrew Keen, 2015)



There are too many examples to mention where the Internet, and access to the Internet, is lauded (sold?) as the answer. Recent examples include Facebook's scheme to provide access to some services in India, of course through Facebook as platform. Despite the claims that this will provide millions with 'free' access, there is ample evidence that it will be anything but free. [See, for example, the critique by Vlad Savov (2015).] Not only do millions see Facebook and Google as the Internet, Facebook increasingly promotes itself as the Internet through Internet.org, which focuses on providing access to "the Internet" to millions in developing world contexts. One example is Facebook's attempt to roll out its 'free' access to the 100 million users on the African continent. For many concerned that students in developing world contexts lag behind due to a lack of access to the Internet, initiatives like the above are often too attractive to decline.

Against this backdrop and the uncritical acceptance of promises and claims from Silicon Valley, the book by Andrew Keen – "The Internet is not the answer" (2015) – is a must-read.

Andrew Keen has been described as the Christopher Hitchens of the Internet – and, most probably like Christopher Hitchens, Keen is both hated and lauded. Amidst the hype and the Silicon Valley narrative that everything is broken and the Internet can fix it, Keen's book "The Internet is not the answer" provokes, unsettles, possibly infuriates, and can be ignored only at one's peril.

Central to the book is Keen's proposal that "Rather than the answer, the Internet is actually the central question about our connected twenty-first-century world" (p. xiii). On buying the book I was reminded of other sceptical approaches and disruptions of the Silicon Valley narrative, such as the work of Audrey Watters (the Cassandra of #edtech), Neil Selwyn, Evgeny Morozov and many others. Late in 2015 Watters delivered a keynote titled "Technology imperialism, the Californian ideology, and the future of higher education" at the 26th ICDE World Conference hosted by the University of South Africa. (See my blog post on her keynote.) These authors have profoundly shaped my own sensitivities and assumptions about the potential of (educational) technology.

For example, Selwyn (2014) suggests that educational technology is "a value-laden site of profound struggle that some people benefit more from than others – most notably in terms of power and profit" (p. 2). Selwyn (2014) also proposes that we need to see and engage with educational technology as a political tool and construct and an increasingly commercial field. We need to understand educational technology as "a knot of social, political, economic and cultural agendas that is riddled with complications, contradictions and conflicts" (p. 6). Understanding and scoping the potential of educational technology is therefore much "messier" (p. 9) than Silicon Valley, governments and educational institutions would have us believe. Against the backdrop of the "truthiness" (p. 10) and "techno-romantic" (p. 13) assumptions in much of the discourse surrounding educational technology, Selwyn suggests that "a pessimistic stance is the most sensible, and possibly the most productive, perspective to take" (p. 14). Such a pessimistic and sceptical approach "is at least willing to accept that digital technology is not bringing about the changes and transformations that many people would like to believe" (p. 15). Selwyn's approach does not result in despondency, but rather in "an active engagement with continuous alternatives" (p. 16). As such, "The Internet is not the answer" engages very critically and pessimistically (in the sense that Selwyn and Watters use the term) with the promises and realities surrounding the Internet.

Keen summarises his book in the Preface, and in attempting to provide a review, I cannot summarise its main gist better than Keen himself:

The more we use the contemporary digital network, the less economic value it is bringing to us. Rather than promoting economic fairness, it is a central reason for the growing gulf between rich and poor and the hollowing out of the middle class. Rather than making us wealthier, the distributed capitalism of the new networked economy is making most of us poorer. Rather than generating more jobs, this digital disruption is a principal cause of our structural unemployment crisis. Rather than creating more competition, it has created immensely powerful new monopolists like Google and Amazon.

Its cultural ramifications are equally chilling. Rather than creating transparency and openness, the Internet is creating a panopticon of information-gathering and surveillance services in which we, the users of big data networks like Facebook, have been packaged as their all-too-transparent product. Rather than creating more democracy, it is empowering the rule of the mob. Rather than encouraging tolerance, it has unleashed such a distasteful war on women that many no longer feel welcome on the network. Rather than fostering a renaissance, it has created a selfie-centered culture of voyeurism and narcissism. Rather than establishing more diversity, it is massively enriching a tiny group of young white men in black limousines. Rather than making us happy, it’s compounding our rage (pp. xiii-xiv).

The preceding two paragraphs almost read like a manifesto of what the Internet is not. Like these two paragraphs, the book often left me breathless, as Keen produces one piece of evidence after the other, like a passionate prosecutor who knows that s/he has only limited time to capture the imagination of the jury and, increasingly, the TV audiences and social media streams. The pace and amount of evidence can, however, also be the book's drawback – there is almost too much, and the fervour with which Keen presents his case that the Internet is not the answer can be a mind-numbing experience. As Keen builds his argument that the Internet is not the great equaliser and has, so far, not delivered on its initial promise, the very thoroughness of the book may also count against it.

Keen agrees that "the Internet is not all bad" (p. 8), but he claims that "the hidden negatives outweigh the self-evident positives" (p. 9) and that those who think there is more positive to the Internet "may not be seeing the bigger picture" (p. 9). It is interesting that, while I thoroughly enjoyed Eli Pariser's "The filter bubble", Nicholas Carr's "The shallows" and, more recently, Dave Eggers's "The circle", the pace and almost religious fervour with which Keen charges at and destroys the myth that the Internet is the answer becomes, at times, almost too much.

Despite feeling out of breath following Keen as he races through the history of the Internet and the several industries destroyed as a result of it, there are many, many brilliant analyses of the impact and forces behind the reality that every place is connected to everywhere else in one big and ever-increasing distributed network. The leitmotif throughout the book is the proposal that the "Internet has created new values, new wealth, new debates, new elites, new scarcities, new markets, and above all, a new kind of economy" (p. 33). This new kind of economy is anything but cooperative in nature, nor does it result in a more equal and just distribution… In stark contrast to the hype and the claims to the contrary, the "Internet is dominated by winner-take-all companies like Amazon and Google that are now monopolising vast swaths of our information economy" (p. 36). Keen proposes that "the rules of this new economy are thus those of the old industrial economy – on steroids" (p. 47).

Keen's analysis shies away from easy answers and steers clear of some of the other unnuanced (in my opinion) critiques of the 'self' in a networked age. For example, Keen states that "our contemporary obsession with public self-expression has complex cultural, technological, and psychological origins that can't be exclusively traced to the digital revolution" (p. 106). Despite the complex and mutually constitutive factors shaping public self-expression in our current age, there is little doubt that the statement "I update, therefore I am" (p. 106) cuts deep into our personal and collective digital practices. It would seem that "if we have no thought to Tweet or photo to post, we basically cease to exist" (p. 107; Keen quoting Malkani, 2013). Not only has the "shameless self-portrait… emerged as a dominant mode of expression", it may have become "proof of our existence in the digital age" (p. 107).

The Internet does not, despite the claims, "empower the weak, the unfortunate, those traditionally without a voice"; instead, the Internet "has… compounded hatred towards the very defenceless people it was supposed to empower" (p. 149). The Internet heralds "Big hatred meets big data" (p. 151, Keen quoting Seth Stephens-Davidowitz). Throughout Keen's book there is an ominous refrain about the role of Silicon Valley in creating a new medieval world – "a jarring landscape of dreadfully impoverished and high-crime communities like East Palo Alto, littered with unemployed people on food stamps, interspersed with fantastically wealthy and entirely self-reliant tech-cities…" (p. 206).

As an antidote to the hype and the unwarranted claims that the Internet provides equal opportunity for all and contributes to a more just and equal world, Keen suggests that history, as the opposite of forgetting, is the answer. "It's particularly through the lens of nineteenth- and twentieth-century history that we can best make sense of the impact of the Internet on twenty-first-century society. The past makes the present legible" (p. 215). Throughout the book Keen refers not only to the history of the Internet, but also to other dramatic changes such as the demise of Kodak, the clothing industry in London, and the music industry – to mention but a few. If I understand Keen correctly, he suggests that understanding not only how technological advances disrupted these industries, but also the reasons for these disruptions, may keep us from having too many stars in our eyes when considering the impact of the Internet. The basic claim is that none of these technological revolutions or disruptions "transformed the role of either power or wealth in the world" (p. 216). Keen strongly suggests that the Internet in its current form will definitely not "translate into a less hierarchical or unequal society" but will, instead of bringing "openness and the destruction of hierarchies", compound "economic and cultural inequality" and create "a digital generation of masters of the universe" (p. 218).

Keen furthermore bemoans the fact that the main role-players on the Internet not only enjoy higher profit margins than ever before, but are also "less harassed by governments than their predecessors" (p. 218). The sum total of the current grip of the new masters of the universe (think Amazon, Google, Facebook, Instagram…) is that these masters not only act in the dark but are also unaccountable to the public and to governments. Keen seems to propose that stronger and more extensive regulation and transparency will go a long way towards realising (some of) the early ideals of the Internet. Despite this proposition, Keen (pp. 223-224) quotes Ignatieff, who asks "whether elected governments can control the cyclone of technological change sweeping through their societies."

I, for one, doubt it. It is not that I don’t think that regulation and legislation can steer the Internet towards more accountability and transparency, but I somehow suspect that we underestimate the power multinational corporations and the corporate-military-government industry have over politicians and governments.

Keen recognises that the answer cannot only be more regulation, and he proposes not only a Bill of Rights but also a Bill of Responsibilities "that establishes a new social contract for every member of networked society" (p. 226).

Keen concludes by agreeing (p. 227) with Jarvis that central to our conversations about the role and impact of the Internet should be the question "What kind of society are we building here?" Therefore the "Internet may not (yet) be the answer, but it nonetheless remains the central question of the first quarter of the twenty-first century" (pp. 227-228). In an interesting addition to the paperback version, Keen added an "Afterword", written a year after the book's first publication in 2014. In the Afterword, he is much more hopeful that "the Internet can indeed become a successful operating system for the twenty-first-century connected life" (p. 234).

I hope he is right, but I don’t hold my breath.

References

Keen, A. (2015). The Internet is not the answer. London, UK: Atlantic Books.

Selwyn, N. (2014). Distrusting educational technology. Critical questions for changing times. New York, NY: Routledge.

 


Algorithmic decision-making in higher education: There be dragons there…



Algorithms do not have agency. People write algorithms. Do not blame algorithms.

Do not blame the drones. The drones are not important. The human operators are important. The human operators of algorithms are not lion tamers.

 Do not blame the drones for making you depressed. Do not blame the algorithms for blowing up towns. Oceania has not always been at war with Eastasia (Ellis, n.d.)

I am neither a data scientist nor do I have any background in computer science. I am an educator and researcher with a keen interest in how we engage with student data, issues pertaining to privacy and, increasingly, the potential and harm of algorithmic decision-making in higher education. Amidst claims and promises that algorithmic decision-making will assist higher education to make better and faster decisions about student applications, personalise student learning and assessment, and increase student retention and success, I cannot help but feel uncomfortable about the design, accountability and unintended consequences of algorithms in higher education. Reading "The black box society" by Frank Pasquale (2015), work by John Danaher (2014) and Evgeny Morozov (2013), the provocation piece by Barocas, Hood and Ziewitz (2013), and the unfolding unease with the scope and impact of the use of artificial intelligence and machine learning only strengthens my discomfort.

I also would like to acknowledge the many conversations with a colleague of mine who would often be bemused (if not irritated) by my concerns about algorithms – their reach, their design and how they shape our world. If he were to edit this blog, he would have immediately cautioned against the implication that algorithms have agency and act independently of human design and intention. Whenever I would share an article about how algorithms shape our lives, he would always state: No, it is not the algorithm; it is the person (or team) who designed the algorithm. He would emphasise that the algorithm is but a tool in the hand of its designer… If algorithms do discriminate, it is because they were designed to discriminate. If algorithms are biased, it is because the biases of their designers and developers were encoded into them.

So, given that algorithms increasingly shape my world, why does this make me feel so uncomfortable and uneasy?

Was I just as uncomfortable when humans used to make decisions about what I am worth, what my creditworthiness is, what my health risk profile is? Were humans less biased than algorithms? Or to what extent does the bias inherent in algorithms impact me more than when the same bias was present in my dealings with a human behind a desk? Am I just as uncomfortable with algorithms when I rely on them for the best route to a destination, to find the cheapest airfare, or when I enjoy reading a book found as a result of a recommender system?

I trust algorithms when searching for a cheap airfare or the best route to avoid a traffic jam, so why am I so uncomfortable with algorithms in higher education? Can I trust them?

Oops, I did it again. Is it not strange that it is somehow easier to grasp and deal with the impact of algorithms on our lives by ascribing human qualities to them?

Povey and Ransom (2000) found that students using technology in mathematics anthropomorphise technology as a mechanism to voice their discomfort with the seeming power struggle between technology and humanity. These authors point out that talking about technology in human terms is "an aspect of a wider contemporary discourse on the relationship between technology and society" (p. 60). They refer to the public uproar when a computer beat world chess champion Garry Kasparov:

[The outcome of the match] threw some commentators into a tizzy. After all, they reasoned, how long can it be before [a computer], say, launches all the missiles in the world or gets its own late-night talk show? (People Magazine, 26 May 1997, p. 127, as quoted by Povey and Ransom, 2000, p. 60)

Does this not sound like the way we talk about algorithms?

Fox (2010) also explored the phenomenon of anthropomorphism and states that it "is rampant in all cultures and religions" (par. 2) and "ingrained in human nature" (par. 8), from the way we worship gods that resemble ourselves to how we make sense of a "largely meaningless world" (par. 16). He proposes that we "are more likely to anthropomorphise when faced with unpredictable situations or entities" (par. 17). By anthropomorphising non-human actors and technology, we claim a "sense of control" (par. 18), belonging and connection. As a result we build relationships with our computers and talk about the stock market as climbing higher or flirting with higher values… (par. 29).

Specific to our anthropomorphising of technology, Buchanan-Oliver, Cruz and Schroeder (2010) claim that the way we speak about technology originates from "deeply-seated anxieties toward the mythic figure of the cyborg, which has been read as monstrous, Frankensteinian icon inviting both sympathy and revulsion" (p. 636). As such, talking about algorithms as having agency may resemble "technology as prosthesis" (Buchanan-Oliver et al., 2010, p. 642) or an extension of humanity (with all of our hopes, goodwill, fears, bias and hunger for power). The way we talk about algorithms may furthermore herald increasingly porous boundaries between human and posthuman, where we "mutate at the rate of cockroaches, but we are cockroaches whose memories are in computers, who pilot planes and drive cars that we have conceived, although our bodies are not conceived at these speeds" (Stelarc and Orlan, quoted by Buchanan-Oliver et al., 2010, p. 644). So technology and algorithms are no longer external tools to be used by us, but have become "an intrinsic part of human subjectivity" (Buchanan-Oliver et al., 2010, p. 645).

And then there is the ever-increasing threat that machines will outsmart us… (see Dockrill's post of 11 December 2015 – "Scientists have developed an algorithm that learns as fast as humans. That's the tipping point right there, folks."). Or see this collection of essays edited by John Brockman (2015) – "What to think about machines that think."

While it is tempting to think in terms of a binary – situations where decisions are made exclusively by humans versus situations where decisions are made exclusively (sic) by algorithms – the reality is much more nuanced, as John Danaher points out in a post of 15 June 2015 – see the diagram below.


Image credit: Danaher (2015, June 15)

What I like about Danaher's proposal is that it provides a more nuanced understanding not only of the different phases of data collection and use, but of the way the framework relates these different phases to different combinations of human and algorithm interaction. Different combinations are possible where, for example, algorithms collect the information, but the analysis is done either by humans only, or shared with algorithms, or done by algorithms with humans supervising, or done by algorithms without human supervision. (For a full discussion of the different combinations and implications, see Danaher, 2015.)
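
To make these combinations easier to hold in mind, here is a minimal sketch (my own illustration, not Danaher's code) that simply enumerates them; the phase names and involvement labels are my shorthand for the options described above.

```python
# A minimal sketch (my own shorthand, not Danaher's code) of the idea that each
# phase of working with (student) data can involve humans and algorithms in
# different ways, and that the combinations multiply quickly.
from enum import Enum
from itertools import product

class Phase(Enum):
    COLLECTION = "collection"
    ANALYSIS = "analysis"
    DECISION = "decision"

class Involvement(Enum):
    HUMAN_ONLY = "humans only"
    SHARED = "shared between humans and algorithms"
    ALGORITHM_SUPERVISED = "algorithms, with human supervision"
    ALGORITHM_ONLY = "algorithms, without human supervision"

# Every possible configuration of such a pipeline: 4 modes ^ 3 phases = 64.
configurations = list(product(Involvement, repeat=len(Phase)))
print(f"{len(configurations)} possible combinations")

# One example: algorithms collect, the analysis is shared, humans decide.
example = dict(zip(Phase, (Involvement.ALGORITHM_ONLY,
                           Involvement.SHARED,
                           Involvement.HUMAN_ONLY)))
for phase, mode in example.items():
    print(f"{phase.value:<10} -> {mode.value}")
```

Even in this toy enumeration it is clear that the question is never simply 'human or algorithm', but which combination we have chosen at which phase, and who remains answerable for it.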

It is important to note that there is possibly another layer embedded in the above diagram, recognising that algorithms may have been written exclusively by humans, or developed through iterative cycles of artificial intelligence. Embedded and encoded in these processes are human bias and goodwill – where accountability for, and the ethical implications of, this mutually constitutive process resemble a 'wicked' problem, described as "a social or cultural problem that is difficult or impossible to solve for as many as four reasons: incomplete or contradictory knowledge, the number of people and opinions involved, the large economic burden, and the interconnected nature of these problems with other problems."

The 'wickedness' of trying to make sense of my discomfort with algorithmic decision-making in this blog is also due to, firstly, my lack of the theoretical tools and academic background needed to fully understand how algorithms work, and secondly, the difficulty of explaining the intricacies of my discomfort about 'losing control'…

Having acknowledged my possible lack of understanding, allow me then to voice my discomfort in layperson's terms. Though I have acknowledged that we should not think in terms of binaries – humans making decisions versus algorithms (created by humans) making the decisions (sic) – thinking in terms of a binary gives me a handle on this slippery phenomenon.

The definition and scope/scale of the knowledge about me

In times past, when humans made decisions about my creditworthiness, they most probably relied on past documents and records (on file) of my interactions with their institution, and on information I provided on the prescribed application form, with my signature to confirm that I told the truth. I cannot deny that my race, gender, language and home address played (and still play) a crucial role in their decisions. Depending on who interviewed me (and in those years it was almost certain to have been a white male), my chances of being successful were fairly good. Even today, if I were to be interviewed by a person of a different race and home language, the legacy of my whiteness may actually carry the day.

In the context of algorithmic decision-making, I am not sure (actually, I never know) which sources of information, collected in which context and for what purpose, are being used to inform the final decision. As each source of information is combined with another, each source's boundary of integrity collapses, and the biases and assumptions that informed the collection of data in one context are collapsed and morphed with other sources of information carrying their own biases and contexts. We are becoming increasingly small and vulnerable nodes in the lattice of information networks where, like the character of 'K' in Franz Kafka's The Trial, we are never told what the allegations against us are or what the sources of information are. All we are told is that "Proceedings have been instituted against you…" (Kafka, 1984, p. 9), without ever being able to know what they know.

[See the essay by John Danaher on issues regarding fairness in algorithmic decision-making (2015, November 5)].

The actor, algorithms and data brokers

Recently, Waddell (2015), in an interview with Phillip Rogaway (author of "The moral character of cryptographic work"), stated that "computer scientists and cryptographers occupy some of the ivory tower's highest floors" (par. 1). The notion of the "data scientist" is emerging as an all-encapsulating title and the "hottest" job title of the 21st century (Chatfield, Shlemoon, Redublado, & Rahman, 2014, p. 2). Data scientists have also been called "gods" (Bloor, 2012), "rock stars" (Sadkowsky, 2014), "high priests" (Dwoskin, 2014; Nielsen, 2014), "engineers of the future" (van der Aalst, 2014) and "game changers" (Chatfield et al., 2014, p. 2).

So, can I trust them to write algorithms if the designers of algorithms don’t see their algorithms as deeply political and flowing from and perpetuating existing power relations, injustices and inequalities, or creating new ones? To what extent do they accept responsibility for the social impact of their algorithms? To what extent can they be held accountable?

In the past, decisions about my financial future, my application to register, or my application for health benefits were also made by humans, often with less information at their disposal than the scope of information algorithms now scrape and use to produce judgements and evaluations. These humans were no less biased, nor more informed, than the designers and writers of algorithms, so why am I uncomfortable with algorithms?

One possible reason is that the creators of algorithms are faceless, unaccountable, hidden in a Kafkaesque maze where algorithms feed off one another in perpetual cycles of mutation. Where, a number of years ago, I could have petitioned the human who made the decision, or asked to see his or her supervisor, the creators of algorithms are hidden, faceless actors who create and destroy futures with impunity.

Do algorithm writers need a code of conduct, as proposed by John Naughton (6 December 2015)? Do we need algorithmic angels (Koponen, 2015, April 18)? Is it possible to govern algorithms, and what should be in place? (Barocas, Hood & Ziewitz, 2013).

What are our options? What are our students’ options?

What are our options when my whole life becomes a single digit (Pasquale, 2015, October 14)?

In the context of the quantification fetish in higher education, where we count everything, what are the ethical implications when we reduce the complexity of our students' lives to single digits, to data points on a distribution chart? What are the ethical implications when we then use these to allocate or withhold support, spending our resources on more 'worthy' candidates in the game of educational roulette? What does due process look like in a world of automated decisions (Citron and Pasquale, 2014)?

What are our options? In a general sense I think the proposal by Morozov (2013) is an excellent start. He proposes four overlapping solutions, namely (1) to politicise the issue of the scope and use of algorithms; (2) to learn how to sabotage the system by refusing to be tracked; (3) to create "proactive digital services"; and (4) to abandon preconceptions. (See the discussion by Danaher, 2014.)

In the light of the asymmetrical power relationship between higher education and our students, we simply cannot ignore the need to reflect deeply on our harvesting and use of student data. When we see higher education first and foremost as a moral endeavour, our commitment to "do no harm" implies that we should be much more transparent about our algorithms and decision-making processes.

Who will hold higher education accountable for the data we harvest and our analyses?

Among other stakeholders, we cannot ignore the role of students. They have the right to know. They have a right to know what our assumptions and understandings of their learning journeys are. They should demand that we do not assume that their digital profiles resemble their whole journey. They have a right to due process.

If only they knew.

Image credit: Image compiled from two images –

http://blog.wikimedia.org/2014/07/11/how-to-research-beyond-wikimetrics/

http://pic1.win4000.com/wallpaper/a/52047e1caa613.jpg

References

Bloor, R. (2012, December 12). Are the data scientists future CEOs? [Web log post]. Retrieved from http://insideanalysis.com/2012/12/are-the-data-scientists-future-ceos/

Buchanan-Oliver, M., Cruz, A., & Schroeder, J. E. (2010). Shaping the body and technology: Discursive implications for the strategic communication of technological brands. European Journal of Marketing, 44(5), 635-652.

Chatfield, A.T., Shlemoon, V.N., Redublado, W., & Rahman, F. (2014). Data scientists as game changers in big data environments. ACIS. Retrieved from http://www.researchgate.net/publication/268078811_Data_Scientists_as_Game_Changers_in_Big_Data_Environments

Citron, D. K., & Pasquale, F. A. (2014). The scored society: due process for automated predictions. Washington Law Review, 89, 1-33.

Dwoskin, E. (2014, August 8). Big data’s high-priests of algorithms. The Wall Street Journal. Retrieved from http://tippie.uiowa.edu/management-sciences/wsj2014.pdf

Fox, D. (2010). In our own image. New Scientist, 208(2788), 32-37.

Kafka, F. (1984). The trial. Translated by Willa and Edwin Muir. London, UK: Penguin.

Nielsen, L. (2014). Unicorns among us: understanding the high priests of data science. Wickford, Rhode Island: New Street Communications.

Povey, H., & Ransom, M. (2000). Some undergraduate students’ perceptions of using technology for mathematics: Tales of resistance. International Journal of Computers for Mathematical Learning, 5(1), 47-63.

Sadkowsky, T. (2014, July 2). Data scientists: The new rock stars of the tech world. [Web log post]. Retrieved from https://www.techopedia.com/2/28526/it-business/it-careers/data-scientists-the-new-rock-stars-of-the-tech-world

van der Aalst, W. M. (2014). Data scientist: The engineer of the future. In Enterprise Interoperability VI (pp. 13-26). Springer International Publishing. Retrieved from http://bpmcenter.org/wp-content/uploads/reports/2013/BPM-13-30.pdf

Seeing Jesus in toast: Irreverent ideas on some of the claims pertaining to learning analytics


"In 2004 Diane Duyser sold a decade-old grilled cheese sandwich that bore a striking resemblance to the Virgin Mary. She got $28,000 for it on eBay" (Matter, 2014), and in 2009 Linda Lowe found an image of Jesus staring at her from a piece of toast. The phenomenon is called pareidolia and has been explained as completely natural, and all too human (Liu, Li, Feng, Li, Tian, & Lee, 2014; Tanne, 2014). Pareidolia is described as a "psychological phenomenon involving a stimulus (an image or a sound) wherein the mind perceives a familiar pattern of something where none actually exist."

In the world of data and predictive analysis, a closely related phenomenon is called apophenia – seeing patterns where none actually exist (boyd & Crawford, 2012). In the context of higher education, where we have access to ever-increasing volumes, velocity and variety of student digital data, apophenia is an uncomfortable companion in the analysis of student data. Harvesting and combining student data from disparate sources opens up the opportunity – and the risk – of inferring relations unthinkable ten years ago.
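
To make apophenia concrete, here is a minimal sketch (my own illustration, not from boyd and Crawford) showing how purely random 'student data' will, given enough variables, reliably offer up 'significant' correlations.

```python
# A minimal sketch (my own illustration) of apophenia in student data: with
# enough unrelated variables, some pairs will correlate "significantly" by
# chance alone. Every variable below is pure noise.
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(42)
n_students, n_variables = 200, 40            # e.g. clicks, downloads, logins, ...
noise = rng.normal(size=(n_students, n_variables))

spurious = []
for i, j in combinations(range(n_variables), 2):
    r, p = stats.pearsonr(noise[:, i], noise[:, j])
    if p < 0.05:                              # "significant" at the usual threshold
        spurious.append((i, j, round(r, 2)))

# 780 pairs are tested, so roughly 5% (about 39) pass the threshold by chance.
print(f"{len(spurious)} 'significant' correlations found in pure noise")
```

None of these 'findings' says anything about students; they say something about the number of comparisons we were willing to make.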

As we engage with an ever-increasing amount and scope of student data, we may be tempted to rush to look for patterns without considering our own assumptions and epistemologies, how we select our data, how we slice and dice it, how we clean our data sets, and how we deal with the often uncomfortable silences in our data and analysis. We may be tempted to make claims about the quality and impact of student engagement based on the number of clicks, participation in online discussion forums, and the number of downloads of resources. From this evaluation of the depth and quality of their engagement (often based on the quantification of our definition of 'engagement') we then design personalised assessments and curricula and allocate resources. Increasingly, our predictive analyses also utilise the immense speed and scope of algorithmic decision-making, destining students for a learning journey over which they have very little control or insight into its machinations.

It may be worthwhile to heed the words of Silver (2012), who warns that in noisy systems with underdeveloped theory there is a real danger of mistaking the noise for a signal, not realising that the noise pollutes our data with false alarms, "setting back our ability to understand how the system really works" (p. 162). In a context where our abilities to harvest and analyse student data may outpace not only our regulatory frameworks but, more importantly, our theoretical and ethical dispositions, the latest issue of the Journal of Learning Analytics comes as a welcome relief.

In this issue the relationship between learning analytics and theory is recognised as important ("Why theory matters more than ever in the age of big data") and this relationship is explored in contexts such as "Theory-led design of instruments and representations of learning analytics" and "Beyond time-on-task: The relationship between spaced study and certification in MOOCs." Other contributions include, inter alia, "Does seeing one another's gaze affect group dialogue? A computational approach" and "Learning analytics to support teachers during synchronous CSCL: Balancing between overview and overload."

As the amount, velocity and variety of student data increase, so will the noise and the potential to see patterns which either don't exist, or which do not contribute to understanding student success as the result of increasingly messy, complex and non-linear interactions between context, students and institutions at the intersection of curricula, pedagogies, assessment and institutional (in)efficiencies and operations. We therefore need to slow down our conversations (e.g. Selwyn, 2014) on the potential of learning analytics and create spaces to think critically about the epistemologies and ontologies that shape our harvesting and analyses of student data.

The plea to slow down our discussions on, and embrace of, the potential of learning analytics seems out of place in a higher education that has become an increasingly privatised and/or costly commodity, characterised by an obsession with return on investment and just-in-time products delivered by just-in-time labour, aiming to get the products off the shelves in the fastest possible time. Instead of slowing down, we would like to speed up our search for signal amidst the noise. There is, however, a danger that we may find Jesus in toast.

Let me state clearly that there is no doubt in my mind that evidence or data, and the ethical harvesting and analysis of student data, can and should inform the management of teaching and learning, the development of curricula, and assessment and student support strategies and interventions. My question is not whether we should harvest and analyse student data, but rather how we engage with student data and the search for relationships that matter, in the light of the fact that higher education is an open and recursive system. How do we engage with evidence of what works when the evidence does not tell us whether the intervention was appropriate and ethical?

I agree with Biesta (2007, 2010) that the issue is not the usefulness of evidence, but rather how we define evidence, what we include and exclude, acknowledging our assumptions about data, and being honest about the implications of our research design decisions. Current evidence-based decision-making practices favour technocratic modes that assume that "the only relevant research questions are questions about the effectiveness of educational means and techniques, forgetting, among other things, that what counts as 'effective' crucially depends on judgments about what is educationally desirable" (Biesta, 2007, p. 5). We need to understand the limitations of our designs when we explore, in an open, semiotic and recursive system, how interventions work. We need to acknowledge our search for the "magic bullet of causality" (Biesta, 2010, p. 496). "Much talk about 'what works' … operates on the assumption of a mechanistic ontology that is actually the exception, not the norm in the domain of human interaction" (Biesta, 2010, p. 497).

There is a real danger that we think of and apply learning analytics as if education were a closed and isolated environment, such as a laboratory setting where we can limit the number of variables and report on those that made a difference. Contrary to such an understanding, it would be safer (and most probably closer to reality) to think of education in terms of the Cynefin framework's (Snowden & Boone, 2007) proposal of simple, complicated, complex and chaotic environments. In all four of these environments it is possible to harvest and analyse evidence, but with very different results. I have a strong suspicion that we think of education as simple and at most complicated, while education is, most probably, rather complex if not chaotic at times. In complicated environments, Snowden and Boone (2007) suggest that cause-and-effect relationships are discoverable but that there is more than one 'right' answer. In complex systems, there are no right answers, and though it may be possible to trace correlation, causation becomes almost impossible to prove and, more importantly, to replicate.

At many educational conferences, when I listen to reports and evidence on interventions that resulted in an increase in student retention and success, I cannot help but see Jesus smiling at me from a piece of toast.

Image credit: Adapted from http://www.thegryphon.co.uk/2014/09/the-ig-nobel-awards-the-science-prize-that-rewards-the-weird-and-wonderful/

References

Biesta, G. (2007). Why “what works” won’t work: evidence-based practice and the democratic deficit in educational research, Educational Theory, 57(1), 1–22. DOI: 10.1111/j.1741-5446.2006.00241.x.

Biesta, G. (2010). Why ‘what works’ still won’t work: from evidence-based education to value-based education, Studies in Philosophy of Education, 29, 491–503. DOI 10.1007/s11217-010-9191-x.

boyd, D., & Crawford, K. (2012). Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, communication & society, 15(5), 662-679.

Liu, J., Li, J., Feng, L., Li, L., Tian, J., & Lee, K. (2014). Seeing Jesus in toast: Neural and behavioral correlates of face pareidolia. Cortex, 53, 60-77.

Selwyn, N. (2014). Distrusting educational technology. Critical questions for changing times. New York, NY: Routledge.

Silver, N. (2012). The signal and the noise: Why most predictions fail – but some don’t. New York, NY: Routledge.

Snowden, D. J., & Boone, M. E. (2007). A leader’s framework for decision making. Harvard Business Review, 85(11). Retrieved from http://aacu-secure.nisgroup.com/meetings/ild/documents/Symonette.MakeAssessmentWork.ALeadersFramework.pdf

Tanne, J. H. (2014). Seeing Jesus in a piece of toast and other scientific discoveries win Ig Nobel awards. BMJ, 349, g5764.


Students’ role in learning analytics: From data-objects to collaborators



Image credit: http://www.forensic-news.com/hackers-copy-fingerprint-data-from-android-devices-3/

In most of the discourses on learning analytics, students and their data are seen as mere data points or data objects and recipients of services. Student data are the source of many hours of enjoyment as data analysts, educators, student support staff and administrators count and correlate a range of variables such as the number of clicks, the number of downloads, and the number of attempts to pass a course. We then add gender, race, age, employment status and physical addresses (as a proxy for socio-economic status) to the mix and, voilà, we can now personalise their learning based on our analyses, because we know what they need…
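
To illustrate what such counting and correlating can amount to, here is a deliberately naive sketch (my own illustration; the field names, weights and postcode lookup are hypothetical) of the kind of 'risk score' this paragraph gestures at – note how the address proxy quietly does much of the work.

```python
# A deliberately naive sketch (hypothetical fields and weights, my own
# illustration) of a "risk score" built from clicks, downloads, attempts and a
# postcode used as a proxy for socio-economic status.
from dataclasses import dataclass

@dataclass
class StudentRecord:
    clicks: int
    downloads: int
    previous_attempts: int
    postcode: str                 # stands in for socio-economic status

# Hypothetical lookup: income decile per postcode (0 = poorest, 9 = wealthiest).
POSTCODE_INCOME_DECILE = {"0001": 2, "8001": 9}

def risk_score(s: StudentRecord) -> float:
    """Higher score = flagged as 'at risk' = earmarked for (or denied) support."""
    engagement = 0.4 * s.clicks + 0.6 * s.downloads       # arbitrary weights
    history = 5.0 * s.previous_attempts
    # The proxy at work: students from poorer postcodes start out 'riskier'.
    proxy_penalty = 2.0 * (9 - POSTCODE_INCOME_DECILE.get(s.postcode, 5))
    return history + proxy_penalty - 0.1 * engagement

student = StudentRecord(clicks=120, downloads=30, previous_attempts=1, postcode="0001")
print(round(risk_score(student), 1))   # the student's life, reduced to one number
```

The point is not that any institution uses exactly these weights, but that whatever weights are chosen encode somebody's assumptions about what 'engagement' and 'risk' mean – and the student rarely gets to see them.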

There is, however, a real danger that learning analytics may serve as prosthetic and parasite, supplementing and replacing authentic learning and frantically monitoring "little fragments of time and nervous energy" (Murphie, 2014, p. 19). While I would like to propose that seeing students as collaborators and participants, rather than data points and objects, can assist in humanising learning analytics, we need to understand the frantic gathering of student data in the current context of higher education, characterised by institutions dancing to the tune of "evidence-based management" (see Biesta, 2007, 2010), where we subscribe to "measurement mania" (Birnbaum, 2001, p. 197), audit practices as "rituals of verification" (Power, 1999, 2004) and the "neoliberal lexicon of numbers" (Cooper, 2014, par. 5). This is the new normal, where funding follows performance instead of preceding it (Hartley, 1995), and where the success of prediction and the increase in returns on investment have become survival practices in "Survivor: The higher education series."

Considering the amount of student data higher education institutions have access to, and the fiduciary duty of higher education to address concerns about sometimes appalling rates of student failure (Slade & Prinsloo, 2013), lack of effective or appropriate student support, and institutional failures, higher education cannot afford not to collect and analyse student data. Knowing more about our students raises, however, a number of ethical issues, such as whether they know that we are observing them and analysing their online behaviours for clues to determine the allocation of resources, with the need for intervention correlated with the cost of intervention and the probability that the intervention will have the necessary effect and therefore guarantee a return on investment. There are also issues related to whether students have access to their digital profiles, whether they can verify their records and provide context, whether they are protected against downstream use, and who will have access to their records and for what purposes. And finally, there is the issue of the ethics of knowing – knowing implies responsibility. Once we know, for example, that a student from a poor neighbourhood (his/her address as proxy) has not logged on for a week, or has revealed having difficulty in coping with the course materials, we have an obligation to act. So, while there are also ethical issues in not knowing what we could have known, we often forget the ethical responsibilities that come into play when we do know…

In the broader discourses of education, and specifically higher education, various practices have been imported from medicine. Examples include institutional review boards (Carr, 2015) and the seemingly inappropriate belief in the gospel of evidence-based management (Biesta, 2007, 2010). There is also the practice of educational triage which, although practised, has not really entered the main discourses relating to student support, learning analytics and institutional research (Manning, 2012; Prinsloo & Slade, 2014). In a context where higher education increasingly faces changes in funding regimes and increased costs and demands, Prinsloo and Slade (2014) ponder the question: "how do we make moral decisions when resources are (increasingly) limited?" (p. 309). As a way of balancing costs with the impact and ethical dimensions of decisions, educational triage provides an interesting perspective on balancing cost, care and ethics. (For a full discussion of educational triage as construct and practice, see Prinsloo and Slade, 2014.)

Though the links, overlaps and differences between practices in the medical fraternity and higher education are acknowledged, we cannot ignore the fact that our thinking about ethics in educational research has been hugely influenced by ethical principles, guidelines and practices in the medical fraternity.

So it was with interest that I read about the Precision Medicine Initiative (PMI) launched early in 2015 by the White House. The PMI aims, among other things, to provide individualised health care based on collaboration between patients, medical staff and researchers, using collected and gifted data to prevent or effectively treat diseases. Amidst the hype of empowering patients and offering individualised care, of particular interest to me, as an educator and researcher interested in the scope and ethical implications of learning analytics, is the initiative's aim "to engage individuals as active collaborators – not just as patients or research objects."

Excursus: In the context of learning analytics, student involvement as participants was first mentioned by Kruse and Pongsajapan (2012) and expanded on by Prinsloo and Slade (2014, 2015).

It is important to note that, as far as I could establish, there are two versions of the privacy and trust principles, namely those of 8 July and 9 November. Also interesting is the critique and feedback (dated 4 August 2015) of William W. Horton (Chair: ABA Health Law Section) of the American Bar Association on the proposal (dated 8 July).

This initiative is founded on a number of privacy and trust principles ensuring that the right to privacy and ethical research are guaranteed. The principles include issues regarding governance; transparency; participant empowerment; respect for participant preferences; data sharing, access, and use; and data quality and integrity (version of 9 November). The 8 July version included a section on the assumptions that informed the principles, as well as a section on "security." The earlier (8 July) version's section titled "Reciprocity" is titled "Participant empowerment through access to information" in the version of 9 November.

The earlier version of the PMI (dated 8 July) acknowledges a number of assumptions (p. 2) such as:

  • “Participants will be partners in research, and their participation will be entirely voluntary”
  • “Participants will play an integral role in the cohort’s governance through direct representation on committees”
  • With regard to the variety of data sources the PMI states – “Participants will be able to voluntarily contribute diverse sources of data – including medical records, genomic data, lifestyle information, environmental data, and personal device and sensor data.”
  • The PMI is also clear that participants will have “access their own medical information.”
  • Security is addressed and the PMI guarantees – "a robust data security framework will be established to ensure that strong administrative, technical, and physical safeguards."
  • With regard to ‘consent’ the PMI suggests that consent is dynamic, ongoing and negotiated – “Given the anticipated scope and duration of PMI, single contact consent at the time of participant enrolment will not be sufficient for building and maintaining the level of public trust we aim to achieve. A consent process that is dynamic and ongoing will better serve the initiative’s goals of transparency and active participant engagement.”

No reasons are provided for why the assumptions contained in the version of 8 July were deleted from the final version of 9 November. As far as I could assess, outside of the issue of 'security', all the assumptions are sufficiently covered in the final version (9 November).

In the next section, I provide a short and selective overview of the PMI (9 November) and specifically focus on aspects that I suspect can benefit our policies, frameworks and practices in learning analytics.

For example, under ‘Governance’ (p. 2) the PMI suggests the following:

  • Substantive participant representation – "should include substantive participant and community representation at all levels of program oversight, design, and implementation" (emphasis added). Interestingly, the 8 July version enshrined active collaboration among participants with regard to governance by stating as a fundamental assumption – "Participants will play an integral role in the cohort's governance through direct representation on committees established to oversee cohort design and data collection, use, management, security, and dissemination" (p. 2; emphasis added).

Excursus: Would there be a difference between substantive and direct participation and representation? How practical would such a principle be in learning analytics?

  • The PMI further states that "Risks and potential benefits of research for families and communities should be considered in addition to risks and benefits to individuals. The potential for research conducted using PMI cohort data to lead to stigmatization or other social harms should be identified and evaluated through meaningful and ongoing engagement with relevant communities."

Excursus: Currently there is little or no oversight in learning analytics, despite the ethical concerns and the potential for harm. Willis, Slade and Prinsloo (in press) suggest that while the prevention of harm and discrimination in research falls under the purview of institutional review boards, there is currently no clear guidance on whether learning analytics qualifies as research and therefore needs oversight by the IRB and, if learning analytics is not considered research, on how, and by whom, the ethical implications and the potential for harm and discrimination will be overseen.

The PMI addresses the issue of ‘Transparency’ as follows:

  • Transparency is accepted as dynamic, ongoing and participatory. The 8 July version (p. 4) stated explicitly – “To ensure participants remain adequately informed throughout participation in the cohort, information should be provided at the point of initial engagement and periodically thereafter. Information should be communicated to participants clearly and conspicuously concerning: how, when, and what information and specimens will be collected and stored; generally how their data will be used, accessed, and shared; the goals, potential benefits, and risks of participation; the types of studies for which the individual’s data may be used; the privacy and security measures that are in place to protect the participant’s data; and the participant’s ability to withdraw from the cohort at any time, with the understanding that data included in aggregate data sets or used in past studies and studies already begun cannot be withdrawn” (emphasis added). The 9 November version (p. 2) covers all of these in separate points but also adds that “Communications should be culturally appropriate and use languages reflective of the diversity of the participants” (p. 3; emphasis added).

Excursus: As Prinsloo and Slade (2015) suggest, it is crucial that we think past the binaries of simply opting in or out, towards a more nuanced, continued and dynamic interaction with students as collaborators in student-centric learning analytics at different points in the process. The PMI’s suggestion that participants should be involved at the “point of initial engagement and periodically thereafter” and fully informed regarding “how, when, and what information and specimens will be collected and stored; generally how their data will be used, accessed, and shared; the goals, potential benefits, and risks of participation; the types of studies for which the individual’s data may be used; the privacy and security measures that are in place to protect the participant’s data; and the participant’s ability to withdraw from the cohort at any time, with the understanding that data included in aggregate data sets or used in past studies and studies already begun cannot be withdrawn” resonates strongly with such an approach.

  • “Participants should be promptly notified following discovery of a breach of their personal information.”

Excursus: Currently, due to the lack of oversight or regulation of learning analytics (see Willis et al., 2015), students are often left without any recourse and may not even know when a breach of their personal information has occurred.

Of specific interest is the comment by Horton (ABA Health Section), who suggests that there is a need to take cognizance of the OECD guidelines regarding the disclosure and protection of information, with specific mention of the potential that shared information may be used for commercial purposes, or for non-research purposes (e.g., by insurers, employers, or law enforcement).

With regard to ‘Respecting Participant Preferences’ (pp. 3-4), the 9 November version includes reference to the following:

  • To be “broadly inclusive, recruiting and engaging individuals and communities with varied preferences and risk tolerances concerning data collection and sharing.” Interestingly, the 8 July version also included the use of information, not only its collection and sharing. What does this deletion signify?
  • Another interesting point is that the PMI (both versions) stress “participant autonomy and trust.” This is achieved through “a dynamic and ongoing consent and information sharing process” (9 November).

Excursus: What does “participant autonomy” mean in the context of the asymmetrical power relationship between medicine and patients? How autonomously can patients really make decisions when they do not necessarily understand the implications of withdrawal and, secondly, are confronted with often differing opinions on their options? The PMI states that participants will be able to “re-evaluate their own preferences as data sharing, user requirements, and technology evolve” (p. 3) – and while this should be lauded, what are the implications of withdrawal?

 “Participants should be able to withdraw their consent for future research use and data sharing at any time and for any reason, with the understanding that data included in aggregate data sets or used in past studies and studies already begun cannot be withdrawn” (p. 3; emphasis added)

Interestingly, the PMI (in both versions) states that consent cannot be withdrawn in the case of studies that have already begun. Do researchers and participants know, from the outset, how long the harvesting, analysis and use of information will continue? What happens if the scope changes due to new insights? Students in higher education contexts may be protected by the fact that courses have predetermined durations and that most learning analytics projects take place at course level.

With regard to withdrawal from the PMI, Horton (ABA Health Section) suggests in his feedback the “development of policies, procedures and notifications to Participants which would clarify when and how the right [to withdraw] could be exercised, distinguish between identifiable and non-identifiable personal information, and articulate the limits on the withdrawal of information already in use” (p. 10).

The principle of “Participant empowerment through access to information” (p. 4) (called ‘Reciprocity’ in the 8 July version) includes, inter alia, the following:

  • “should enable participants’ access to the medical information they contribute to the PMI in consumer-friendly and innovative ways”
  • That “educational resources should be made available to participants to assist them in understanding their health information and to empower them to make informed choices about their health and wellness”

Excursus: Currently, in higher education, anecdotal evidence suggests that students do not have access to the learning analytics data that institutions have collected about them. Secondly, what I find interesting about the PMI is the commitment to make educational resources available to assist participants in understanding the analyses and findings, so that patients can make informed decisions.

In many of the discourses surrounding learning analytics, the emphasis is on the benefits institutions will derive from learning analytics to make choices on behalf of students. The PMI seems to turn this around and ensure that patients will be able to make the decisions affecting their health.

The feedback provided by Horton (ABA Health Section) is crucial in this regard. Horton (p. 11) raises the issue of whether making the information and analysis available to patients may create “an expectation of medical intervention and/or treatment.” This echoes the question raised earlier in this blog: does knowing bring with it the responsibility of caring?

The second issue Horton raises with regard to this principle is whether healthcare providers have the necessary capabilities and capacities to usefully apply the PMI data. This is also pertinent in the higher education context, where it is not clear whether faculty or student support teams have the necessary capabilities and capacities to interpret and act on the analyses.

Horton raises a third issue that is equally pertinent when thinking about learning analytics, namely that “Unvalidated findings should never be communicated to participants because they may be confusing and obfuscate any meaningful application to improve healthcare” (p. 12; emphasis added). Anecdotal evidence in learning analytics suggests that there is often (mostly?) no time to validate the findings of a learning analytics project because of the urgent need to intervene. And secondly, if we accept that education is an open and recursive system, the next student cohort will, in all probability, be different, with different needs and characteristics.

So what do we do, in the context of learning analytics, to validate findings and ensure appropriate and ethical interventions?

Under the principle of “Data Sharing, Access, and Use” (9 November, pp. 4-5) the following elements are mentioned:

  • “Certain activities should be expressly prohibited, including the sale or use of data for targeted advertising”
  • There should also be “multiple tiers of data access—from open to controlled”
  • “Unauthorized re-identification and re-contact of participants will be expressly prohibited.” Interestingly, the 8 July version also included “…and consequences should accompany such actions.”
  • The 8 July version contained a problematic statement that was removed from the 9 November version: “PMI cohort should maintain a link to participant identities in order to return appropriate information and to link participant data obtained from different sources” (p. 5; emphasis added).

Excursus: In the current mist surrounding learning analytics, and the lack of, and uncertainty regarding, ethical oversight, higher education institutions will have to make very clear which activities would be strictly prohibited, e.g. the sale of data or its use for targeted advertising (often by the providing institution)… In the context of “surveillance capitalism” (Zuboff, 2015) and the monetary value of data, we need to be clear on the exact boundaries of our governance of student data. There are increasing concerns about the implications of the Trans-Pacific Partnership for the exchange value of data.
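
Purely as a thought experiment – not as anything the PMI or any institution prescribes – the sketch below illustrates how “expressly prohibited” activities and “multiple tiers of data access” might be encoded as an explicit, inspectable policy for student data rather than left to the discretion of whoever happens to hold the data. All role names, tiers and rules are hypothetical.

```python
# A hypothetical sketch (not an actual institutional policy) of how the
# PMI-style ideas above -- expressly prohibited uses and tiered access --
# might be made explicit for student data. All names and rules are invented
# purely for illustration.

PROHIBITED_USES = {"sale_of_data", "targeted_advertising"}

# From most open to most controlled; each tier lists the roles allowed in.
ACCESS_TIERS = {
    "open":       {"public"},                      # aggregate, de-identified reporting
    "restricted": {"course_team", "researcher"},   # pseudonymised, course-level data
    "controlled": {"ethics_approved_researcher"},  # identifiable data, under formal oversight
}

def request_access(role: str, tier: str, purpose: str) -> bool:
    """Grant access only if the purpose is permitted and the role may enter the tier."""
    if purpose in PROHIBITED_USES:
        return False  # expressly prohibited, regardless of who asks
    return role in ACCESS_TIERS.get(tier, set())

# Example: a course team requesting pseudonymised data to plan support interventions.
print(request_access("course_team", "restricted", "student_support"))   # True
print(request_access("vendor", "restricted", "targeted_advertising"))   # False
```

The point of such a sketch is not the code but the governance question it forces: someone has to decide, and write down, which purposes are prohibited and who may enter which tier.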

There is also the issue of the potential re-identification of students, which is currently not explicitly addressed in learning analytics.

Horton raises the issue that “access [to data] should be permitted only where there are assurances that the recipient of the data has adequate protections in place to ensure the privacy of participants and confidentiality of their information” (p. 13). In the context of learning analytics in higher education, where oversight and ethical clearance are mostly left to the integrity of the individuals accessing the information, what are the implications?

There is nothing out of the ordinary under the principles of ‘Data Quality and Integrity’ (in both versions). What is interesting is that the section dealing with ‘Security’ (in the 8 July version) was scrapped entirely from the 9 November version. What makes this more puzzling is the fact that Horton suggested two pages of aspects to be addressed under ‘security’ before the section was removed.

(In)conclusions

Despite concerns about drawing parallels between research and practices in the fields of medicine and education, I read the PMI (both versions) with interest. While I am not sure the PMI addresses the complexities of the asymmetrical power relationship between medicine, medical practices and patients, the PMI does point to some issues that higher education can and should consider in order to move beyond seeing students as data objects and providers of data (often without their knowledge), towards seeing them as collaborators and participants.

References

Biesta, G. (2007). Why “what works” won’t work: evidence-based practice and the democratic deficit in educational research, Educational Theory, 57(1), 1–22. DOI: 10.1111/j.1741-5446.2006.00241.x.

Biesta, G. (2010). Why ‘what works’ still won’t work: from evidence-based education to value-based education, Studies in Philosophy of Education, 29, 491–503. DOI 10.1007/s11217-010-9191-x.

Birnbaum, R. (2001). Management fads in higher education. Where they come from, what they do, why they fail. San Francisco, CA: Jossey-Bass.

Carr, C. T. (2015). Spotlight on ethics: institutional review boards as systemic bullies. Journal of Higher Education Policy and Management, 37(1), 14-29.

Cooper, D. (2014, December 5). Taking pleasure in small numbers: How intimately are social media stats governing us? [Web log post]. Retrieved from http://blogs.lse.ac.uk/impactofsocialsciences/2014/12/05/taking-pleasure-in-small-numbers/

Hartley, D. (1995). The ‘McDonaldisation’ of higher education: food for thought? Oxford Review of Education, 21(4), 409-423.

Kruse, A. & Pongsajapan, R. (2012). Student-centered learning analytics. CNDLS Thought Paper. 1-12. Retrieved from: https://cndls.georgetown.edu/m/documents/thoughtpaper-krusepongsajapan.pdf

Manning, C. (2012, March 14). Educational triage [Web log post]. Retrieved from http://colinmcit.blogspot.co.uk/2012/03/educational-triage.html

Murphie, A. (2014). Auditland. PORTAL Journal of Multidisciplinary International Studies, 11(2). Retrieved from https://epress.lib.uts.edu.au/journals/index.php/portal/article/view/3407/4525

Power, M. (1999). The audit society: Rituals of verification. 2nd edition. London, UK: Oxford University Press.

Power, M. (2004). Counting, control and calculation: Reflections on measuring and management. Human Relations, 57(6), 765-783.

Prinsloo, P., & Slade, S. (2014). Educational triage in open distance learning: Walking a moral tightrope. The International Review of Research in Open and Distributed Learning, 15(4), 306-331.

Prinsloo, P., & Slade, S. (2015, March). Student privacy self-management: implications for learning analytics. In Proceedings of the Fifth International Conference on Learning Analytics and Knowledge (pp. 83-92). ACM.

Slade, S. & Prinsloo, P. (2013). Learning analytics ethical issues and dilemmas. American Behavioral Scientist, 57(10), 1510-1529.

Willis, J.E., Slade, S., & Prinsloo, P. (in press). Ethical oversight of student data in learning analytics: a typology derived from a cross-continental, cross-institutional perspective. Submitted to a special issue of ETR&D titled “Exploring the Relationship of Ethics in Design and Learning Analytics: Implications for the Field of Instructional Design and Technology.”

Zuboff, S. (2015). Big other: surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30(1), 75-89.

 


Of heresies, heretics, and the (im)possibility of hope in higher education


Heresies

Detail: Butcher Boys (1985/86) by Jane Alexander

 Abandon all hope, ye who enter here (Inferno, Dante)

 Amidst the absolute horror, fear and nausea triggered by events such as the recent attacks in #Beirut, #Paris and #Mali, and the continued sponsored and condoned violence in #Palestine and #Yemen, there is, I suspect, a deep-seated questioning of “how is all of this still possible in the 21st century?”

What happened to ‘progress’ and the belief that a better world is possible and achievable? Where does the current (and possible permanent?) disillusionment leave the belief that education is the key driver to ‘progress’ and will, per se, result in a more just and equal society? Last week a meme circulated on social media with a picture of Malala Yousafzai with the words “With guns you can kill terrorists. With education you can kill terrorism.”

I wish I could believe. But I cannot. Not that I don’t want to believe, but somehow I suspect that we overestimate the potential of education, on its own, to address generations of injustice, poverty and inequality. Call me a heretic if you want, but allow me to explore the possibility that unbridled economic growth and progress is a heresy. And education as this heresy’s servant.

Allow me then, for a brief moment of your time, to reconsider our continued and uncritical belief that humanity progressively gets better… As conversation partners for this blog I take the work of John Gray (2002, 2004) and Zygmunt Bauman (2004, 2011, 2012). Considering the work of Gray, John Banville said that “John Gray has always been the odd-sheep-out” and John Preston called Gray a “prophet of doom.” Bauman’s work has also come in for criticism, characterised as full of “sombre warnings and dark judgments.” Despite these criticisms, I agree with the assessment that “Bauman on a bad day is still far more stimulating than most contemporary social thinkers.”

In contemplating education in this interregnum (Best, 2015), allow me then to reflect on some of the points made by John Gray and Zygmunt Bauman.

Gray (2002) suggests that “The uses of knowledge will always be shifting and crooked as humans are themselves. Humans use what they know to meet their most urgent needs – even if the result is ruin” (p. 28). Regarding humanity’s belief in progress as inevitable, Gray (2004) suggests that “the core of the belief in progress is that human values and goals converge in parallel with our increasing knowledge. The twentieth century shows the contrary. Human beings use the power of scientific knowledge to assert and defend the values and goals they already have. New technologies can be used to alleviate suffering and enhance freedom. They can, and will, also be used to wage war and strengthen tyranny” (p. 106; emphasis added).

Considering the advances since the Enlightenment against the backdrop of the absolute horrors of the two World Wars and the banality of evil represented by the mushroom clouds over Hiroshima and Nagasaki, the Holocaust and the Vietnam war, one would have expected humanity to have permanently shied away from the abyss. And yet we didn’t, and we still don’t.

Instead of doing everything we possibly can to steer clear of the abyss, we are “messing with forces on a grand scale” (Martin, 2006, p. 15) – on a number of levels. Among the many challenges facing humanity, according to Martin (2006), are environmental collapse, extreme poverty, unstoppable global migrations, non-state actors with extreme weapons, and violent religious extremism resulting in a new Dark Age.

Depending on your worldview, many suggest that higher education has unreservedly bought into the neoliberal project of globalisation as championed by the World Bank, the International Monetary Fund, the World Trade Organisation and the corporate-industrial-military complex. Economic growth is a leitmotif in curricula and is sold (often literally) as a prerequisite for human progress despite evidence suggesting that “economic growth does not translate into the growth of equality” (Bauman, 2011, p. 50). Amidst the unbridled consumerism and decadent and rampant (if not rapacious) capitalism, inequalities have increased and the number of displaced people is the largest in human history. The millions of displaced and permanently unemployed are classified as disposable, as the collateral waste of progress; those who have become permanently redundant constitute a new normal, the new, permanent “Other” (Bauman, 2004).

We live in times where “the incomprehensible has become routine” (Bauman, 2006, p. 14). As we built higher walls around our gated communities, closed our borders, and increased our entry requirements, our fears just got worse.

Fear is at its most fearsome when it is diffuse, scattered, unclear, unattached, unanchored, free floating, with no clear address or cause; when it haunts us with no visible rhyme or reason, when the menace we should be afraid of can be glimpsed everywhere but is nowhere to be seen (Bauman, 2006, p. 2)

Welcome to the 21st century.

As humanity spirals from one genocide to the next, we have increasing reason to question the gospel of Progress. John Gray (2004) states that the “belief in progress is the Prozac of the thinking classes” (p. 3). I would like to add that the unquestioned belief that education, on its own, can make a difference is most probably co-prescribed with Prozac.

Gray (2004) makes the claim that “History is not an ascending spiral of human advance, or even an inch-by-inch crawl to a better world. It is an unending cycle in which changing knowledge interacts with unchanging human needs. Freedom is recurrently won and lost in an alternation that includes long periods of anarchy and tyranny, and there is no reason to suppose that this cycle will ever end” (p. 3). Gray therefore contests the view that the Enlightenment set humanity on an irreversible path of progress where advances in science and technology will, per se, result in a better world. For many, Gray’s statements amount to heresies, such as his claim that “The lesson of the century that has just ended is that humans use the power of science not to make a new world but to reproduce the old one – sometimes in newly hideous ways… Knowledge does not make us free” (2004, p. 6).

After the recent events in #Beirut, #Paris, #Yemen and #Palestine, the statement by Gray that “The most striking development in politics in the past two decades is that this apocalyptic mentality has gone mainstream” (p. 10) rings eerily true. In the light of the increasing influence of religious fundamentalism (whether in America or Iraq), terror has become “privatised” – something that cannot be tolerated, but also cannot be eliminated (2004, p. 11).

Gray (2004) furthermore states that no one could have foreseen that “irrationality would continue to flourish alongside rapid advances in science and technology” (p. 18). Even the hope sold by Silicon Valley that technology will solve all of humanity’s problems is without foundation, as “[t]here is no power in the world that can ensure that technology is used only for benign purposes” (2004, p. 20). He continues:

“We are not masters of the tools we have invented. They affect our lives in ways we cannot control – and often cannot understand. The world today is a vast, unsupervised laboratory, in which a multitude of experiments are simultaneously underway” (p. 21).

And

“We can’t control our new technologies because we don’t really grasp the totality of their effects. And there is a deeper reason why we are not masters of our technologies: they embody dreams of which we are not conscious and hopes that we cannot bear to give up” (p. 22).

Sobering is Gray’s proposal that homo sapiens is actually homo rapiens: a species with limitless ambitions, living on an earth whose resources are irrevocably finite.

Our present way of life is more prone to disruption than most people think, and its fragility is increasing. We tend to think that as global networks widen and deepen, the world will become a safer place, but in many contexts the opposite is true. As human beings become closely interlinked, breakdowns in one part of the world spread more readily to the rest (p. 61)

In the light of the fact that democracy is seen and sold (literally) as one of the biggest (and deadliest) exports of the United States and its partners/alliances, and the claim that education should help spread the belief in a one-size-fits-all type of democracy (Giroux, 2015), Gray (2004) states that “After all the babble about the irresistible spread of democracy and free markets, the reality is war, protectionism and the shifty politics of secrecy and corruption – in other words, history as usual” (p. 66).

Despite the advances in science improving the lives of many, Gray (2004) states “Science cannot end the conflicts of history. It is an instrument that humans use to achieve their goals, whether winning wars or curing the sick, alleviating poverty or committing genocide” (p. 70).

So where does this leave us? How do we then teach without necessarily believing? How is hope possible in this interregnum?

A good place to start would be to acknowledge that “Knowledge is not an unmixed good; it can be as much a curse as a blessing. If the superseded science in the first half of the twentieth century could be used to wage two hideously destructive world wars, how will the vastly superior science of today be used?” (Gray, 2004, pp. 70-71). I really think that all curricula should have a warning attached to them – advising curriculum developers, instructional designers, students, and quality assurers (to mention but a few) that “knowledge is not an unmixed good”…

Is education willing to acknowledge that “the knowledge maps of the past have, to a large extent, been proven to be fragile and (possibly) the illegitimate offspring of unsavory liaisons between ideology, context and humanity’s gullibility in believing in promises of unconstrained scientific progress” (Prinsloo, 2016 – in press)?

Would we teach different curricula if we believed that “history might be cyclical, not progressive, with the struggles of the earlier eras returning and being played out against a background of increased scientific knowledge and technological power” (Gray, 2004, p. 101)?

How do we help students to “read the world” (Freire, 1972, p. 120) – to recognise the metanarratives, the curricula sold-as-truth, engage with claims and counter-claims, realise (in more than one sense) their agency as constrained, entangled, fractured and possible?

(In)conclusions:

Realising that history may be cyclical, and that knowledge and advances in technology may serve evil or justice, gives me, at least, a sense of purpose, if not hope. In this permanent interregnum where “the old is dying and the new cannot be born” (Gramsci, 1971, p. 110), a certain amount of morbidity and skepticism may be in order.

Image credit

https://upload.wikimedia.org/wikipedia/commons/c/c6/Butcher_boys2.jpg

References

Bauman, Z. (2004). Wasted lives. Modernity and its outcasts. Cambridge, UK: Polity Press.

Bauman, Z. (2011). Collateral damage. Social inequalities in a global age. Cambridge, UK: Polity Press.

Bauman, Z. (2012). On education. Conversations with Riccardo Mazzeo. Cambridge, UK: Polity Press.

Best, S. (2015). Education in the interregnum: an evaluation of Zygmunt Bauman’s liquid-turn writing on education. British Journal of Sociology of Education, 1-18.

Freire, P. (1972). Pedagogy of the oppressed. Harmondsworth, UK: Penguin.

Giroux, H. A. (2015). Democracy in Crisis, the Specter of Authoritarianism, and the Future of Higher Education. Journal of Critical Scholarship on Higher Education and Student Affairs, 1(1), 7.

Gramsci, A. (1971). Selections from the Prison Notebooks of Antonio Gramsci. Edited by Q. Hoare and G. N. Smith. New York, NY: International Publishers.

Gray, J. (2002). Straw dogs. Thoughts on humans and other animals. London, UK: Granta Books.

Gray, J. (2004). Heresies. Against progress and other illusions. London, UK: Granta Books.

Prinsloo, P. (2016 – in press). Metaliteracy, networks, agency and praxis: an exploration. Chapter accepted in T. Mackey and T. Jacobson (eds.), Metaliteracy in Practice

 

 

 


And then everything turned to beige… The quantified academic in an age of academic precarity


[It almost feels obscene not to reflect on the events in #Beirut #Paris #Yemen (the list is endless). I am, however, permanently nauseous, speechless and saturated with claims and counter-claims and the increasing evidence that the events of the last few days, weeks, months and years are becoming the new normal. So forgive me if I don’t share my reflections at this stage. I.Just.Can’t.]

It is that time of the year again where I must report back on not necessarily what I have done or the quality of what I have done, but how much I have done… How many articles? How many chapters? How many single-authored or co-authored articles? How much money did I earn in the form of external research grants? How much am I worth? How many citations? How much did my h-index increase since I last looked and reported on it? How many? How much?

My value contribution as a scholar and a researcher is being diluted to a single score on a template.

I have become a score, a number, and a single digit. Nothing more. But so much less.

And then everything turns to beige. I become a zombie. A member of the living dead.

Please understand that I don’t yearn for a romanticized past of academic freedom that (most probably) never was. As I meandered from being an administrator, to a professional, to an academic, to being a research professor (a journey of 20 years), I heard stories of ‘how good things were’ and ‘how things changed.’ As a fairly recent addition to the ever smaller number of faculty carrying ever bigger administrative tasks and workloads while participating in the dance of life and death as researchers, I can only reflect on the ‘now.’

Let me bore you with some detail.

At the beginning of the year I contract with my supervisor to deliver on a number of deliverables. As a research professor there is not much to negotiate. For example, I have four key performance areas, namely academic leadership (10%), research (70%), community engagement (15%) and academic citizenship (5%). The four performance areas are fixed, and though the percentages are negotiable (within a certain range depending on your job title), they are relatively bizarre and of very little consequence – except to play a role in the weighting of your single-digit percentage in your final rating.

Let me illustrate the point: The key performance area of ‘academic citizenship’ includes my participation in academic and institutional committees, task teams, etc. This year I was the Scientific Chair for a major international conference and the amount of time I spent in meetings, reviews, and planning was much, much more than 5%. I could have increased it to 10% (the maximum) but then I would have had to steal 5% from another key performance area. Which one? And does it really matter? You only have so many hours (a point to which I will return)…

Apart from the percentages allocated to each key performance area, there is also the ‘content’ of each of these key areas, which is increasingly hard-coded – meaning that the definitions and criteria are predetermined and fixed, and the scores automatically calculated. Of my four key performance areas, two are hard-coded – academic leadership and research.

Academic leadership has the following criteria:

  • Contributions to innovative and cutting edge practices in research (carrying a 15% weighting of the allocated weight of 10%). Leaving aside the highly problematic issues of defining ‘innovation’ and ‘cutting edge practices’, there is the fact that what may be innovative or cutting edge in one disciplinary field may not be appropriate in another. How does one allocate a score to ‘innovative’ and ‘cutting edge’? And how innovative and cutting edge can you really be if your research application survived the horrendous ethical review process, journal editorial policies and reviewers who may have very different ideas regarding innovation and the cutting edge…
  • Successful submission of research plans of mentees to the Chair of Department (15% weighting). Score. No indication of how detailed these plans should be. No indication that a mentorship relationship is complex, layered and embedded in power.
  • Mentorship (70% weighting). Very interesting is the fact that your score is determined not only by the number of mentees, but specifically by whether you can provide proof that you assisted them in applications for external funding or rating by the National Research Foundation (NRF). The quality of the mentorship is reduced to assistance with external funding or ratings.

That’s it. That is ‘academic leadership.’ Hard-coded. Scored. Tick. Transfer score to template. Done.

Research as key performance area consists of three criteria:

  • Research outputs and successful completion of postgraduate students (80% weighting). Outputs are clearly defined; there is no doubt regarding what is regarded as an output – if it is on an approved list, tick. If you co-authored the article, half a tick. If your postgraduate student has not successfully completed his or her qualification in the period of reporting, no tick, despite the immense amount of time, energy, blood, sweat and tears the supervisory process demanded from both the supervisor and the student.

It helps that you are required to report on the last 3 or 5 years as this allows for the time and different iterations involved in the publication process. What are not considered at all are your scholarly contributions in other formats, many of them increasingly peer-reviewed and public.

  • Grant applications for external funding (10% weighting). If you have evidence that you applied for external funding, you get a score of 2. If your grant was successful, you are average, a 3. If you have been successful with more than one external grant application, you get a 4, and you attain a full score if the total amount of grant money allocated to your research is in excess of 2 million ZAR.

Money talks. Money makes the world (of research) go round.

  • The third criterion is being rated by the National Research Foundation (NRF) (weighting of 10%). If the NRF rates the gravitas of your research as being acknowledged on a national level (a rating of C3 on a scale of C1-C3), you are allocated a score of 3.

And at the end, the Excel spreadsheet tallies the scores, and who I am – the quality and gravitas of my scholarly contribution – becomes a number. Nothing more, so much less.
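
To make the arithmetic of that reduction concrete, here is a minimal sketch of how such a weighted tally might work. The key performance areas, their weightings and the research and leadership criteria are taken from the description above; the individual scores, the 1-5 scale and all names are illustrative assumptions, not the actual institutional template.

```python
# A minimal sketch of a weighted performance appraisal tally.
# Weightings come from the text above; the scores on a 1-5 scale are
# hypothetical, not the actual template.

def weighted_score(criteria):
    """Collapse {criterion: (score, weight)} into a single number."""
    return sum(score * weight for score, weight in criteria.values())

# Criteria within 'academic leadership' (weights sum to 1.0).
leadership = {
    "innovative_practices":  (3, 0.15),
    "mentee_research_plans": (4, 0.15),
    "mentorship":            (3, 0.70),
}

# Criteria within 'research' (weights sum to 1.0).
research = {
    "outputs_and_postgrad_completions": (4, 0.80),
    "external_grant_applications":      (3, 0.10),  # one successful grant = 'average'
    "nrf_rating":                       (3, 0.10),  # e.g. a C3 rating
}

# The four key performance areas and their negotiated weightings.
performance_areas = {
    "academic_leadership":  (weighted_score(leadership), 0.10),
    "research":             (weighted_score(research),   0.70),
    "community_engagement": (3, 0.15),  # hypothetical flat score
    "academic_citizenship": (4, 0.05),  # hypothetical flat score
}

final_rating = weighted_score(performance_areas)
print(f"Final rating: {final_rating:.2f}")  # one number, however rich the year was
```

However the cells are weighted, the design choice is the same: anything that cannot be expressed as a score against an approved list simply does not enter the sum.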

Don’t get me wrong. I don’t mind being evaluated. I don’t mind presenting evidence of what I think I am ‘worth’ as a researcher. A lot of the evidence of my standing in the field is, in any case, increasingly public and already out there: comments on my blogs, remarks on Twitter, and references by other scholars. Performing my scholarship in public is an immensely risky, but also very rewarding, exercise.

So how do I make sense of this? How do I manage to dance to the beat that is not of my making?

I do understand that, in the context of increased internationalisation and competition, higher education increasingly sells education as a privatised and mostly costly commodity, with an emphasis on return on investment: just-in-time products delivered by just-in-time labor, aiming to get the products (aka students) off the shelves in the shortest possible time.

I do understand that the efficiency of higher education is increasingly monitored and evaluated by auditing and control processes harvesting and analysing data and evidence as/in ‘rituals of verification’ (Power, 1999, 2004). Higher education increasingly resembles “Auditland” (Murphie, 2014:10), where these ‘rituals of verification’ and auditing processes beget more auditing processes in never-ending cycles that affect all learning, teaching and research (Murphie, 2014). Higher education as “Auditland”, where we all spy on one another, compete for scarce resources, and try to outdo one another by providing more evidence, getting those grants, getting the keynote invitations, getting ahead.

I do understand that since the 1990s higher education has increasingly become a fast-food factory or outlet characterised by the mantra of efficiency, quantification, calculability, predictability and control (Hartley, 1995). I do understand that changes in funding regimes resulted in the directive that “funding … follows performance rather than precedes it” (Hartley, 1995: 418). I do understand that the dominant narrative in higher education is that of a positivist quantification fetish (Prinsloo, 2014), informed by a “neoliberal lexicon of numbers” (Cooper, 2014: par. 5), the “tyranny of numbers” and “measurement mania” (Birnbaum, 2001:197).

Despite ‘understanding’, I also see these pervasive auditing and verification rituals as mediated and mediating tools in the service of evidence-based decision making, creating technical systems that simultaneously serve as prosthetic and as parasite – supplementing and replacing authentic learning, and frantically monitoring “little fragments of time and nervous energy” (Murphie, 2014:19).

And amidst all of this I have become colonised as a single-digit score, a spectator to my own drama of losing myself. And then life turns a lighter shade of beige (see the wonderful post by Kate Bowles, and Frank, 1995).

What are my options? I wish I could shout with Kate Bowles that “you don’t have my consent to use my remaining time in this way.” I know that time is irreplaceable. I know that my luck will run out, possibly sooner rather than later. (See the thought-provoking post by Adele Horin.)

I know I cannot sustain the frantic activity, the restlessness, the panic, the dread of the next performance appraisal. I will have to make a plan.

And of course I do. I work longer. I work harder. Weekends are a non-event. The only difference between office hours on a weekend and during the week is the fact that I am (most probably) the only one in the building.

But why, you would ask? Why? Don’t I have a life?

I am a 56-year-old white academic in post-apartheid South Africa, where my options for finding employment outside of academia or in international higher education are zero. Don’t get me wrong. This is not about ignoring the many privileges I had and still have as a white male. I don’t subscribe to the notion of victimhood and suffering that is prevalent in much of the current white Afrikaner discourse. There is a vast difference between recognising the “historic burden of whiteness” and self-abasement or lame apologies (O’Hehir, 2014). My race and gender, and the socioeconomic circumstances of my family, allowed me to play on a field from which many others were excluded. (Also see Bowler, 2014; Crosley-Corcoran, 2014; Gedye, 2014.)

So I cancel a doctor’s appointment. I fit a physiotherapist appointment in during my lunch hour (lunch?) for the unbearable pain in my neck.

Let me make it very clear that I love writing. I absolutely love doing research. I love the excitement of living on the edge of publishing, of awaiting feedback from editors on the submission of my latest article. I am an adrenaline junkie. Forgive me mother for I have sinned. I say my three Hail Josephs and accept the invitation to write a chapter for a book. Imagine. They identified me as a worthy scholar and they would be honored if I would accept their invitation to contribute a chapter. Of course I would. The honor is mine. As a white African on the outside of the hallowed spaces of North Atlantic knowledge production, I am just so honored. How can I refuse? As a white male I am a neutered stray dog with no teeth in my home institution. So when I get invited to participate in an international publication, how can I refuse? Anyway, it is a sole-authored chapter and, as such, worth so many points in the template during the end-of-the-year assessment.

So I graciously accept. “I would be honored.”

So I cancel breakfast on Saturday morning to be in the office earlier. The color beige is not bad at all.

Image credit: https://upload.wikimedia.org/wikipedia/commons/0/06/Wonderland_Walker_5.jpg

References

Birnbaum, R. (2001). Management fads in higher education. Where they come from, what they do, why they fail. San Francisco, CA: Jossey-Bass.

Bowler, D. (2014, August 27). Defined by your ‘blackness.’ [Web log post]. Retrieved from http://ewn.co.za/2014/08/27/opinion-danielle-bowler-defined-by-blackness

Bowles, K. (2013, November 24). Irreplaceable time. [Web log post]. Retrieved from https://musicfordeckchairs.wordpress.com/2013/11/24/irreplaceable-time/

Bowles, K. (2014, March 5). Walking and learning. [Web log post]. Retrieved from http://musicfordeckchairs.wordpress.com/2014/03/05/walking-and-learning/

Cooper, D. (2014, December 5). Taking pleasure in small numbers: How intimately are social media stats governing us? [Web log post]. Retrieved from http://blogs.lse.ac.uk/impactofsocialsciences/2014/12/05/taking-pleasure-in-small-numbers/

Crosley-Corcoran, G. (2014, August 5). Explaining white privilege to a broke white person. [Web log post]. Retrieved from http://www.huffingtonpost.com/gina-crosleycorcoran/explaining-white-privilege-to-a-broke-white-person_b_5269255.html

Frank, A.W. (1995). The wounded storyteller. Body, illness, and ethics. London, UK: The University of Chicago Press, Ltd.

Gedye, L. (2014, October 13). Jou past se poes. The Con. [Web log post]. Retrieved from http://www.theconmag.co.za/2014/10/13/jou-past-se-poes/

Hartley, D. (1995). The ‘McDonaldisation’ of higher education: food for thought? Oxford Review of Education, 21(4), 409-423.

Horin, A. (2015, November 16). Dear reader, my luck has run out. The Age. Retrieved from http://www.theage.com.au/comment/dear-reader-my-luck-has-run-out-20151116-gkzzpi.html

Murphie, A. (2014). Auditland. PORTAL Journal of Multidisciplinary International Studies, 11(2). Retrieved from https://epress.lib.uts.edu.au/journals/index.php/portal/article/view/3407/4525

O’Hehir, A. (2014, August 30). Why acknowledging white privilege is not surrendering to ‘white guilt.’ [Web log post]. Retrieved from http://www.salon.com/2014/08/30/why_acknowledging_white_privilege_is_not_surrendering_to_white_guilt/

Power, M. (1999). The audit society: Rituals of verification. 2nd edition. London, UK: Oxford University Press.

Power, M. (2004). Counting, control and calculation: Reflections on measuring and management. Human Relations, 57(6), 765-783.

Prinsloo, P. (2014, October 22). Mene, mene, tekel, upharsin: researcher identity and performance. Inaugural lecture at the University of South Africa. Retrieved from http://www.researchgate.net/profile/Paul_Prinsloo/publication/267395307_Mene_mene_tekel_upharsin_researcher_identity_and_performance/links/544f2f200cf29473161bf642.pdf
