In most discourses on learning analytics, students and their data are seen as mere data points or data objects, and as recipients of services. Student data are the source of many hours of enjoyment for data analysts, educators, student support staff and administrators as they count and correlate a range of variables such as the number of clicks, the number of downloads, and the number of attempts to pass a course. We then add gender, race, age, employment status and even physical addresses (as a proxy for socio-economic status) to the mix, and voilà, we can now personalise their learning based on our analyses, because we know what they need…
There is, however, a real danger that learning analytics may serve as prosthetic and parasitic, supplementing and replacing authentic learning and frantically monitoring “little fragments of time and nervous energy” (Murphie, 2014, p. 19). While I would like to propose that seeing students as collaborators and participants, rather than as data points and objects, can assist in humanising learning analytics, we need to understand the frantic gathering of student data in the current context of higher education, where institutions dance to the tune of “evidence-based management” (see Biesta, 2007, 2010), subscribe to “measurement mania” (Birnbaum, 2001, p. 197), and embrace audit practices as “rituals of verification” (Power, 1999, 2004) and the “neoliberal lexicon of numbers” (Cooper, 2014, par. 5). This is the new normal, where funding follows performance instead of preceding it (Hartley, 1995), and where successful prediction and increased returns on investment have become survival practices in “Survivor: The Higher Education Series.”
Considering the amount of student data higher education institutions have access to, and their fiduciary duty to address concerns about sometimes appalling rates of student failure (Slade & Prinsloo, 2013), lack of effective or appropriate student support, and institutional failures, higher education cannot afford not to collect and analyse student data. Knowing more about our students, however, raises a number of ethical issues, such as whether they know that we are observing them and analysing their online behaviours for clues to determine the allocation of resources, the need for intervention correlated with its cost, and the probability that the intervention will have the necessary effect and therefore guarantee a return on investment. There are also issues related to whether students have access to their digital profiles, whether they can verify their records and provide context, whether they are protected against downstream use, and who will have access to their records and for what purposes. Finally, there is the ethics of knowing, because knowing implies responsibility. Once we know, for example, that a student from a poor neighbourhood (his or her address as proxy) has not logged on for a week, or has revealed difficulty in coping with the course materials, we have an obligation to act. So, while there are ethical issues in not knowing what we could have known, we often forget the ethical responsibilities that come into play once we do know…
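To make the “ethics of knowing” concrete: the kind of flagging rule described above (postcode as socio-economic proxy plus a week of inactivity) is trivially easy to implement, which is precisely why the responsibility it creates is so easily acquired. The sketch below is purely hypothetical; all field names, thresholds and postcodes are invented.

```python
from datetime import date, timedelta

# Hypothetical postcodes standing in for a socio-economic proxy list;
# purely invented for this sketch.
LOW_SES_POSTCODES = {"0001", "0002"}

def flag_for_intervention(student, today):
    """Flag a student when inactivity and a socio-economic proxy coincide."""
    inactive = (today - student["last_login"]) > timedelta(days=7)
    low_ses = student["postcode"] in LOW_SES_POSTCODES  # address as proxy
    return inactive and low_ses

student = {"last_login": date(2015, 11, 1), "postcode": "0001"}
print(flag_for_intervention(student, today=date(2015, 11, 10)))  # True
```

The moment this returns `True`, the institution “knows” — and the ethical question becomes whether, and how, it is obliged to act.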
In the broader discourses of education, and specifically higher education, various practices have been imported from medicine. Examples include institutional review boards (Carr, 2015) and the seemingly inappropriate belief in the gospel of evidence-based management (Biesta, 2007, 2010). There is also the practice of educational triage, which, although practised, has not really entered the main discourses relating to student support, learning analytics, and institutional research (Manning, 2012; Prinsloo & Slade, 2014). In a context where higher education increasingly faces changes in funding regimes alongside rising costs and demands, Prinsloo and Slade (2014) ponder the question: “how do we make moral decisions when resources are (increasingly) limited?” (p. 309). Educational triage provides an interesting perspective on balancing cost, care and ethics. (For a full discussion of educational triage as construct and practice, see Prinsloo & Slade, 2014.)
Though the links, overlaps and differences between practices in the medical fraternity and in higher education are acknowledged, we cannot ignore the fact that our thinking about ethics in educational research has been hugely shaped by ethical principles, guidelines and practices in medicine.
It was therefore with interest that I read about the Precision Medicine Initiative (PMI) launched early in 2015 by the White House. The PMI aims, among other things, to provide individualised health care based on collaboration between patients, medical staff and researchers, using collected and gifted data to prevent or effectively treat diseases. Amidst the hype of empowering patients and offering individualised care, of particular interest to me, as an educator and researcher in the scope and ethical implications of learning analytics, is the initiative’s aim “to engage individuals as active collaborators – not just as patients or research objects.”
Excursus: In the context of learning analytics, student involvement as participants was first mentioned by Kruse and Pongsajapan (2012) and expanded by Prinsloo and Slade (2014, 2015).
It is important to note that, as far as I could establish, there are two versions of the privacy and trust principles, namely those of 8 July and 9 November 2015. Also of interest is the critique and feedback (dated 4 August 2015) on the 8 July proposal by William W. Horton, Chair of the American Bar Association's Health Law Section.
This initiative is founded on a number of privacy and trust principles ensuring that the right to privacy and ethical research are guaranteed. The principles cover governance; transparency; participant empowerment; respect for participant preferences; data sharing, access, and use; and data quality and integrity (version of 9 November). The 8 July version included a section on the assumptions that informed the principles, as well as a section on “security.” The earlier (8 July) version’s section titled “Reciprocity” is titled “Participant empowerment through access to information” in the 9 November version.
The earlier version of the PMI (dated 8 July) acknowledges a number of assumptions (p. 2) such as:
- “Participants will be partners in research, and their participation will be entirely voluntary”
- “Participants will play an integral role in the cohort’s governance through direct representation on committees”
- With regard to the variety of data sources, the PMI states: “Participants will be able to voluntarily contribute diverse sources of data – including medical records, genomic data, lifestyle information, environmental data, and personal device and sensor data.”
- The PMI is also clear that participants will be able to “access their own medical information.”
- Security is addressed, with the PMI guaranteeing that “a robust data security framework will be established to ensure that strong administrative, technical, and physical safeguards” are in place.
- With regard to consent, the PMI suggests that consent is dynamic, ongoing and negotiated: “Given the anticipated scope and duration of PMI, single contact consent at the time of participant enrolment will not be sufficient for building and maintaining the level of public trust we aim to achieve. A consent process that is dynamic and ongoing will better serve the initiative’s goals of transparency and active participant engagement.”
No reasons are provided for why the assumptions contained in the 8 July version were deleted from the final version of 9 November. As far as I could assess, apart from the issue of ‘security’, all the assumptions are sufficiently covered in the final (9 November) version.
In the next section, I provide a short and selective overview of the PMI principles (9 November), focusing specifically on aspects that I suspect can benefit our policies, frameworks and practices in learning analytics.
For example, under ‘Governance’ (p. 2) the PMI suggests the following:
- Substantive participant representation – the PMI “should include substantive participant and community representation at all levels of program oversight, design, and implementation” (emphasis added). Interestingly, the 8 July version enshrined active collaboration among participants with regard to governance by stating as a fundamental assumption: “Participants will play an integral role in the cohort’s governance through direct representation on committees established to oversee cohort design and data collection, use, management, security, and dissemination” (p. 2; emphasis added).
Excursus: Would there be a difference between substantive and direct participation and representation? How practical would such a principle be in learning analytics?
- The PMI further states that “Risks and potential benefits of research for families and communities should be considered in addition to risks and benefits to individuals. The potential for research conducted using PMI cohort data to lead to stigmatization or other social harms should be identified and evaluated through meaningful and ongoing engagement with relevant communities.”
Excursus: Currently there is little or no oversight in learning analytics, despite the ethical concerns and the potential for harm. Willis, Slade and Prinsloo (in press) suggest that while the prevention of harm and discrimination in research falls under the purview of institutional review boards, there is currently no clear guidance on whether learning analytics qualifies as research and therefore needs IRB oversight – and, if learning analytics is not considered research, on how, and by whom, its ethical implications and potential for harm and discrimination will be overseen.
The PMI addresses the issue of ‘Transparency’ as follows:
- Transparency is accepted as dynamic, ongoing and participatory. The 8 July version (p. 4) stated explicitly: “To ensure participants remain adequately informed throughout participation in the cohort, information should be provided at the point of initial engagement and periodically thereafter. Information should be communicated to participants clearly and conspicuously concerning: how, when, and what information and specimens will be collected and stored; generally how their data will be used, accessed, and shared; the goals, potential benefits, and risks of participation; the types of studies for which the individual’s data may be used; the privacy and security measures that are in place to protect the participant’s data; and the participant’s ability to withdraw from the cohort at any time, with the understanding that data included in aggregate data sets or used in past studies and studies already begun cannot be withdrawn” (emphasis added). The 9 November version (p. 2) covers all of these in separate points but also adds that “Communications should be culturally appropriate and use languages reflective of the diversity of the participants” (p. 3; emphasis added).
Excursus: As Prinsloo and Slade (2015) suggest, it is crucial that we think past the binaries of simply opting in or out, towards a more nuanced, continued and dynamic interaction with students as collaborators in a student-centric learning analytics at different intervals during the process. The PMI’s suggestion that participants should be involved at the “point of initial engagement and periodically thereafter” and fully informed regarding “how, when, and what information and specimens will be collected and stored; generally how their data will be used, accessed, and shared; the goals, potential benefits, and risks of participation; the types of studies for which the individual’s data may be used; the privacy and security measures that are in place to protect the participant’s data; and the participant’s ability to withdraw from the cohort at any time, with the understanding that data included in aggregate data sets or used in past studies and studies already begun cannot be withdrawn” offers a useful model for such interaction.
- “Participants should be promptly notified following discovery of a breach of their personal information.”
Excursus: Currently, due to the lack of oversight or regulation of learning analytics (see Willis et al., in press), students are often left without any recourse and may not even know when there has been a breach of their personal information.
Of specific interest is the comment of Horton (ABA Health Law Section), who suggests that there is a need to take cognizance of the OECD guidelines regarding the disclosure and protection of information, with specific mention of the potential that shared information may be used for commercial purposes, or for non-research purposes by, for example, insurers, employers and law enforcement.
With regard to ‘Respecting Participant Preferences’ (pp. 3-4), the 9 November version includes reference to the following:
- To be “broadly inclusive, recruiting and engaging individuals and communities with varied preferences and risk tolerances concerning data collection and sharing.” Interestingly, the 8 July version also covered the use of information, not only its collection and sharing. What does this deletion signify?
- Another interesting point is that the PMI (in both versions) stresses “participant autonomy and trust.” This is achieved through “a dynamic and ongoing consent and information sharing process” (9 November).
Excursus: What does “participant autonomy” mean in the context of the asymmetrical power relationship between medicine and patients? How autonomously can patients really make decisions, given that they may not understand the implications of withdrawal and that expert opinions on options often differ? The PMI states that participants will be able to “re-evaluate their own preferences as data sharing, user requirements, and technology evolve” (p. 3) – and while this should be lauded, what are the implications of withdrawal?
“Participants should be able to withdraw their consent for future research use and data sharing at any time and for any reason, with the understanding that data included in aggregate data sets or used in past studies and studies already begun cannot be withdrawn” (p. 3; emphasis added).
Interestingly, the PMI (in both versions) states that consent cannot be withdrawn from studies that have already begun. Do researchers and participants know, from the outset, how long the harvesting, analysis and use of information will last? What happens if the scope changes due to new insights? Students in higher education contexts may be protected by the fact that courses have predetermined durations and that most learning analytics projects take place at course level.
With regard to withdrawal from the PMI, in the feedback provided by Horton (ABA Health Section) he suggests the “development of policies, procedures and notifications to Participants which would clarify when and how the right [to withdraw] could be exercised, distinguish between identifiable and non-identifiable personal information, and articulate the limits on the withdrawal of information already in use” (p. 10).
The principle of “Participant empowerment through access to information” (p. 4), called ‘Reciprocity’ in the 8 July version, includes, inter alia, the following:
- “should enable participants’ access to the medical information they contribute to the PMI in consumer-friendly and innovative ways”
- That “educational resources should be made available to participants to assist them in understanding their health information and to empower them to make informed choices about their health and wellness”
Excursus: Currently, in higher education, anecdotal evidence suggests that students do not have access to the learning analytics data that institutions have collected about them. Secondly, what I find interesting about the PMI is the commitment to make educational resources available to assist participants in understanding the analyses and findings, so that patients can make informed decisions.
In many of the discourses surrounding learning analytics, the emphasis is on the benefits institutions will derive from learning analytics to make choices on behalf of students. The PMI seems to turn this around and ensure that patients will be able to make the decisions affecting their health.
The feedback provided by Horton (ABA Health Law Section) is crucial in this regard. Horton (p. 11) raises the issue of whether making the information and analysis available to patients may create “an expectation of medical intervention and/or treatment.” This returns to the question raised earlier in this blog: does knowing bring with it the responsibility to care?
The second issue Horton raises with regard to this principle is whether healthcare providers have the necessary capabilities and capacities to usefully apply the PMI data. This is also pertinent in the higher education context, where it is not clear whether faculty or student support teams have the necessary capabilities and capacities to interpret and act on the analyses.
Horton raises a third issue that is also pertinent when thinking about learning analytics, namely the caution that “Unvalidated findings should never be communicated to participants because they may be confusing and obfuscate any meaningful application to improve healthcare” (p. 12; emphasis added). Anecdotal evidence in learning analytics suggests that there is often (mostly?) no time to validate the findings of a learning analytics project, due to the urgent need to intervene. Secondly, when we accept that education is an open and recursive system, the next student cohort will, in all probability, be different, with different needs and characteristics.
So what do we do, in the context of learning analytics, to validate findings and ensure appropriate and ethical interventions?
Under the principle of “Data Sharing, Access, and Use” (9 November, pp. 4-5) the following elements are mentioned:
- “Certain activities should be expressly prohibited, including the sale or use of data for targeted advertising”
- There should also be “multiple tiers of data access—from open to controlled”
- “Unauthorized re-identification and re-contact of participants will be expressly prohibited.” Interestingly, the 8 July version also included “…and consequences should accompany such actions.”
- The 8 July version included this problematic statement, which was removed from the 9 November version: “PMI cohort should maintain a link to participant identities in order to return appropriate information and to link participant data obtained from difference sources” (p. 5; emphasis added).
Excursus: In the current mist surrounding learning analytics, and the lack of and uncertainty regarding ethical oversight, higher education institutions will have to make very clear which activities are strictly prohibited, e.g. the sale of data or its use for targeted advertising (often by the providing institution). In the context of “surveillance capitalism” (Zuboff, 2015) and the monetary value of data, we need to be clear on the exact boundaries of our governance of student data. There are increasing concerns about the implications of the Trans-Pacific Partnership for the exchange value of data.
There is also the potential re-identification of students, an issue which is currently not adequately addressed in learning analytics.
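The mechanics of re-identification are worth making concrete: given a “de-identified” analytics release and any other dataset sharing a few quasi-identifiers, a simple join can recover identities. The sketch below is purely illustrative; the datasets, field names and scores are entirely invented.

```python
# Hypothetical illustration of the re-identification risk: a "de-identified"
# learning analytics export can often be re-linked to named enrolment records
# through shared quasi-identifiers (here, age and postcode). All data invented.
deidentified = [
    {"age": 21, "postcode": "0001", "risk_score": 0.82},
    {"age": 34, "postcode": "0045", "risk_score": 0.15},
]
enrolment = [
    {"name": "Student A", "age": 21, "postcode": "0001"},
    {"name": "Student B", "age": 34, "postcode": "0045"},
]

def reidentify(deid_rows, named_rows):
    # Build an index on the quasi-identifiers, then join the two releases.
    index = {(r["age"], r["postcode"]): r["name"] for r in named_rows}
    return {index[(r["age"], r["postcode"])]: r["risk_score"] for r in deid_rows}

print(reidentify(deidentified, enrolment))  # {'Student A': 0.82, 'Student B': 0.15}
```

The ease of such linkage is precisely why the PMI's express prohibition on unauthorized re-identification matters, and why learning analytics needs an equivalent.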
Horton raises the issue that “access [to data] should be permitted only where there are assurances that the recipient of the data has adequate protections in place to ensure the privacy of participants and confidentiality of their information” (p. 13). In the context and practice of learning analytics in higher education, where oversight and ethical clearance are mostly left to the integrity of the individuals accessing the information, what are the implications?
There is nothing out of the ordinary under the principle of ‘Data Quality and Integrity’ (in both versions). Interesting, however, is that the section dealing with ‘Security’ in the 8 July version was scrapped entirely from the 9 November version – all the more puzzling given that Horton had suggested two pages of aspects to be addressed under ‘security’.
Despite concerns about drawing parallels between research and practices in the fields of medicine and education, I read the PMI (both versions) with interest. While I am not sure the PMI addresses the complexities of the asymmetrical power relationship between medicine, medical practices and patients, it does point to some issues that higher education can and should consider in order to move beyond seeing students as data objects and providers of data (often without their knowledge), towards seeing them as collaborators and participants.
Biesta, G. (2007). Why “what works” won’t work: evidence-based practice and the democratic deficit in educational research. Educational Theory, 57(1), 1–22. DOI: 10.1111/j.1741-5446.2006.00241.x.
Biesta, G. (2010). Why ‘what works’ still won’t work: from evidence-based education to value-based education. Studies in Philosophy of Education, 29, 491–503. DOI: 10.1007/s11217-010-9191-x.
Birnbaum, R. (2001). Management fads in higher education. Where they come from, what they do, why they fail. San Francisco, CA: Jossey-Bass.
Carr, C. T. (2015). Spotlight on ethics: institutional review boards as systemic bullies. Journal of Higher Education Policy and Management, 37(1), 14-29.
Cooper, D. (2014, December 5). Taking pleasure in small numbers: How intimately are social media stats governing us? [Web log post]. Retrieved from http://blogs.lse.ac.uk/impactofsocialsciences/2014/12/05/taking-pleasure-in-small-numbers/
Hartley, D. (1995). The ‘McDonaldisation’ of higher education: food for thought? Oxford Review of Education, 21(4), 409-423.
Kruse, A. & Pongsajapan, R. (2012). Student-centered learning analytics. CNDLS Thought Paper. 1-12. Retrieved from: https://cndls.georgetown.edu/m/documents/thoughtpaper-krusepongsajapan.pdf
Manning, C. (2012, March 14). Educational triage [Web log post]. Retrieved from http://colinmcit.blogspot.co.uk/2012/03/educational-triage.html
Murphie, A. (2014). Auditland. PORTAL Journal of Multidisciplinary International Studies, 11(2). Retrieved from https://epress.lib.uts.edu.au/journals/index.php/portal/article/view/3407/4525
Power, M. (1999). The audit society: Rituals of verification (2nd ed.). Oxford, UK: Oxford University Press.
Power, M. (2004). Counting, control and calculation: Reflections on measuring and management. Human Relations, 57(6), 765-783.
Prinsloo, P., & Slade, S. (2014). Educational triage in open distance learning: Walking a moral tightrope. The International Review of Research in Open and Distributed Learning, 15(4), 306-331.
Prinsloo, P., & Slade, S. (2015, March). Student privacy self-management: implications for learning analytics. In Proceedings of the Fifth International Conference on Learning Analytics and Knowledge (pp. 83-92). ACM.
Slade, S., & Prinsloo, P. (2013). Learning analytics: ethical issues and dilemmas. American Behavioral Scientist, 57(10), 1510-1529.
Willis, J.E., Slade, S., & Prinsloo, P. (in press). Ethical oversight of student data in learning analytics: a typology derived from a cross-continental, cross-institutional perspective. Submitted to a special issue of ETR&D titled “Exploring the Relationship of Ethics in Design and Learning Analytics: Implications for the Field of Instructional Design and Technology.”
Zuboff, S. (2015). Big other: surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30(1), 75-89.