In the first term of my first year, I skipped a seminar on DH Lawrence. My tutor, Dr Mara Kalnins, pulled me up at the start of the next seminar and politely expressed her disappointment. She also asked me to make up what I’d missed by reading about Carl Jung. I’m not sure that what I read was actually the right thing (I didn’t get a very good mark in the relevant coursework), but what I did read turned into one of those foundational bits of knowledge I possess.
The point of the story (apart from revealing how terrible a student I was) is that my tutor spotted I wasn’t there and intervened.
We’ve been thinking about this process for our OfLA project. In our project jargon:
- The trigger was my absence
- The communication was the polite word
- The intervention was, I think, also the polite word, or perhaps my misdirected reading (a sketch of how these three stages might fit together follows this list)
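A minimal sketch of that three-stage model, in Python, with entirely hypothetical names and fields (not the OfLA project’s actual schema):

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical model of the project jargon: a trigger (something observed),
# a communication (how the student is contacted) and an intervention
# (the support actually offered). All names here are illustrative.

@dataclass
class Trigger:
    student_id: str
    kind: str          # e.g. "absence", "no-engagement"
    observed_on: date

@dataclass
class Communication:
    trigger: Trigger
    channel: str       # e.g. "in person", "email"
    message: str

@dataclass
class Intervention:
    communication: Communication
    action: str        # e.g. "a polite word", "directed reading"

# My own story, expressed in these terms (the date is invented):
trigger = Trigger("student-001", "absence", date(1984, 10, 8))
comm = Communication(trigger, "in person", "We missed you last week.")
intervention = Intervention(comm, "directed reading on Carl Jung")
```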
I’d argue that you’d have to be psychopathically libertarian to think there’s anything wrong with this story.
My starting question is therefore:
Is it ethical for institutions to use learning analytics to intervene with students?
I’m going to just say ‘yes’.
Institutions, and particularly individual teaching staff, have always intervened to a greater or lesser extent. Learning analytics enables institutions to systematise that support. There’s an argument that in a perfect system, with close personal relationships between tutor and student, there would be no need for such a system. I’m sympathetic to this argument, but I believe that analytics can provide valuable information early, which tutors may not be able to spot through their normal range of interactions.
At the start of our pilot, we also discussed the problem from the opposite perspective: if we had data on students and could see that they were at risk, did we have an ethical responsibility to intervene?
If I think learning analytics are, in principle, acceptable, the next question is:
What kinds of interventions are appropriate?
At NTU we are clear: learning analytics is used to support student success. Because we are most concerned about students failing, it is essentially an early warning system. We should also do more with students who are not at risk of failing, but with finite resources we have started in the place of greatest need.
We expect tutors to use the learning analytics tool to spot students at risk, both by checking in periodically and by intervening if a no-engagement alert is raised. These ought to be relatively uncontroversial interventions: students are offered help but can choose whether or not to take it up. There are no consequences for ignoring the advice (apart from the higher risk of failure), and the student remains in charge of their own learning.
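As a rough illustration of the kind of check involved (a sketch only; the fourteen-day window below is invented, not NTU’s actual setting):

```python
from datetime import date, timedelta

NO_ENGAGEMENT_DAYS = 14  # illustrative threshold only

def no_engagement_alert(last_engagement: date, today: date) -> bool:
    """Raise an alert if a student has shown no engagement for the window."""
    return today - last_engagement >= timedelta(days=NO_ENGAGEMENT_DAYS)

# The alert prompts an offer of help; the student decides what to do with it.
if no_engagement_alert(last_engagement=date(2023, 1, 2), today=date(2023, 1, 20)):
    print("No-engagement alert: offer support, don't mandate it.")
```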
I’d argue that using learning analytics as part of an assessment regime is more problematic. I’m sure there are plenty of programmes where students are required to prove that they have used particular resources or attended above a threshold, both of which are akin to learning analytics, but I have some reservations, as this implies there is only one valid way to study.
I think far more controversial is:
What data sources are appropriate for triggering interventions?
I’ve written before that we made a series of decisions to use only student engagement data, not background characteristics, in our learning analytics algorithms. I still feel that this is the right decision. I’m uncomfortable using background in algorithms: treating every student within a group in the same way feels profoundly wrong-headed. We are dealing with individuals, not whole cohorts. For me, it’s far preferable to use students’ engagement with their course.
We use engagement: students’ use of resources, attendance, submission of coursework and so on. Perhaps the most important ethical issue with this approach is to remember that the data is only a reflection of activity; it’s unlikely to indicate motivation, personal barriers and the like. There is still a role for a skilled professional to engage the learner and help them overcome potential obstacles.
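To make that concrete, a toy engagement measure might blend those signals like this (the weights, caps and field names are entirely illustrative, not NTU’s actual algorithm):

```python
def engagement_score(resource_uses: int, sessions_attended: int,
                     sessions_total: int, coursework_submitted: int,
                     coursework_due: int) -> float:
    """Toy score in [0, 1]: an equal-weighted blend of three activity signals.

    Note what it doesn't capture: motivation, personal barriers, or *why*
    engagement dropped; that still needs a skilled professional.
    """
    resources = min(resource_uses / 20, 1.0)  # cap: 20 uses counts as "full"
    attendance = sessions_attended / sessions_total if sessions_total else 0.0
    submission = coursework_submitted / coursework_due if coursework_due else 0.0
    return (resources + attendance + submission) / 3

print(engagement_score(resource_uses=5, sessions_attended=7, sessions_total=10,
                       coursework_submitted=1, coursework_due=2))  # ~0.48
```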
When should a system generate a warning?
I think that time is a real issue for learning analytics. On one level, it’s an entirely operational matter, but I can’t help but feel that time acts as an effect multiplier with ethical implications. An intervention that may be appropriate at one point in time may be ethically dubious at another.
There’s a tension between the timing of a data-based diagnosis and its actionability. Learning analytics is likely to get more and more accurate as the year progresses, but who needs an alert on the last day of the academic year? Early triggers may be more of an ethical issue. In theory, it ought to be possible to generate risk-based alerts very early in the academic year, particularly if demographic data is used. But knowing at the very start of the year that a student is highly at risk of failing has its own problems. Could it demotivate students? Do we actually have the resources to support them? Are there issues of free will?
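One way to picture time as an effect multiplier is to gate alerts on whether they are still actionable. A sketch, with invented week numbers and threshold:

```python
LAST_ACTIONABLE_WEEK = 26  # illustrative: after this, support can't land in time

def should_alert(risk: float, week: int, threshold: float = 0.7) -> bool:
    """Suppress alerts that arrive too late to act on.

    Early alerts raise the opposite worry: a high-risk flag in week 1,
    especially one driven by demographics, may demotivate rather than help.
    """
    if week > LAST_ACTIONABLE_WEEK:
        return False  # accurate but useless: nobody needs a last-day alert
    return risk >= threshold

print(should_alert(0.9, week=28))  # False: too late to be actionable
print(should_alert(0.9, week=2))   # True, but ethically the harder case
```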
What is the consequence of using learning analytics?
The last issue, therefore, is the consequence of using learning analytics data. Whilst in theory presenting data enables staff and students to act to change engagement, there’s also a risk that it may lead to stereotyping, by either the staff member or the student themselves. And making students aware that they are doing less than their peers, or are at risk of failing, may not spur them to increase engagement but instead lead to stress and anxiety.
An ethical consideration is that learning analytics will generate false positives: alerts and warnings for students who are not actually at risk. Even if your system is excellent and flags students who are 90% likely to fail, one in ten of those flagged will still progress.
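The arithmetic is worth spelling out (a toy calculation using the post’s own hypothetical 90% figure):

```python
flagged = 100     # students flagged as at risk in a cohort
precision = 0.9   # each flagged student is 90% likely to fail

false_positives = flagged * (1 - precision)
print(false_positives)  # 10.0: ten flagged students would have progressed
                        # anyway, yet every one of them receives a warning
```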
One final problem with learning analytics is that the focus is on the student. I’d argue that this is an appropriate lens for targeting one-to-one support, but it does mean that our focus is on the learner, when the actual problem may be the quality of teaching, the provision of resources or many other failings of the institution. These systemic problems may come to light through interventions, but the focus still lies with student deficit.
And no mention of GDPR.