One of the benefits of learning analytics is the capacity not just to automatically analyse data, but also to automate some or all of the next steps. I’m primarily interested in learning analytics for student success, so for me the automatic alert when a student is potentially at risk of failure is key. Whether it takes the form of red flags on a dashboard or emails to a tutor/adviser, one appeal of learning analytics is that it can spot patterns and push out warnings long before a critical incident such as a failed assignment has occurred.
But as with all things Learning Analytics, this isn’t really about technology, it’s about people and systems.
Before going any further, my very excellent colleague Dr Rebecca Siddle wrote two case studies about alerts for our ABLE Project website: one on ‘no engagement’ alerts and one on a failed project to set up ‘non-submission’ alerts.
What are the fundamentals associated with effective alerting?
1. A meaningful trigger
One of the first challenges is the validity of the alert. Is there a statistical association between the trigger and the outcome it is meant to predict, or is the trigger based more loosely on institutional policy or a PSRB requirement? On one level it doesn’t matter, but if the alert is to be used as the basis for an intervention, students may object to what they see as ‘arbitrary’ interventions. I’d argue that it’s easier to explain and defend if there’s a meaningful trigger for the alert.
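One simple way to sanity-check whether a trigger is statistically meaningful is to compare outcomes for students who did and didn’t fire it. The sketch below does this with an odds ratio; the cohort counts are entirely invented for illustration, not taken from any real alerting system.

```python
def odds_ratio(triggered_failed, triggered_passed, quiet_failed, quiet_passed):
    """Odds of failure when the alert fired, divided by the odds when it didn't.

    A ratio well above 1 suggests the trigger is meaningfully associated
    with the outcome; a ratio near 1 suggests it is essentially arbitrary.
    """
    return (triggered_failed / triggered_passed) / (quiet_failed / quiet_passed)

# Hypothetical cohort: 100 students fired the alert, 400 did not.
ratio = odds_ratio(triggered_failed=30, triggered_passed=70,
                   quiet_failed=20, quiet_passed=380)
print(round(ratio, 1))  # → 8.1
```

In practice you would also want a confidence interval before relying on the trigger, but even a crude check like this helps distinguish a predictive alert from a policy-driven one.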
2. The sweet spot between prediction and usability
The truth is that there’s probably a small effect for any non-attendance, and there certainly will be for non-submitted or failed coursework. However, there’s a second problem. If you raise an alert for every missed class or deviation from an expected norm, you potentially swamp users (students or staff), and there’s almost certainly a law of diminishing returns. I’ve dipped in and out of the excellent Duolingo app often enough to know that the friendly reminders soon become ineffective.
Furthermore, in our system we send alerts to tutors, not students. Personal tutors get alerts after 14 days of no engagement. However, if we drop that alert period to 10 or even 7 days, we are likely to see a very significant rise in the number of alerts and a drop in the predictive strength of each alert. We are dealing with time-pressured staff; if we overwhelm them with lower-value alerts, they may not be able to give the necessary attention to the students who really need it.
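The volume trade-off is easy to see with a toy simulation. The sketch below is not our system: the engagement data is randomly generated, and the windows are just the 14/10/7-day values discussed above, but it shows how quickly the alert count grows as the window shrinks.

```python
from datetime import date, timedelta
import random

def alerts_for_window(last_engagement, today, window_days):
    """Return ids of students whose last engagement is older than the window."""
    cutoff = today - timedelta(days=window_days)
    return [sid for sid, last_seen in last_engagement.items() if last_seen < cutoff]

# Simulated cohort: 200 students, each with 0-20 days since last engagement.
random.seed(1)
today = date(2019, 2, 1)
students = {f"s{i}": today - timedelta(days=random.randint(0, 20))
            for i in range(200)}

for window in (14, 10, 7):
    print(f"{window}-day window: {len(alerts_for_window(students, today, window))} alerts")
```

Each shortening of the window adds students with smaller gaps, who are (on average) at lower risk, so the tutor’s inbox grows faster than the pool of genuinely at-risk students does.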
I’ve always understood that attendance declines over time: students start with good intentions, but gradually they prioritise completing coursework over attending classes, have colds, hangovers, decide that they don’t enjoy particular lectures, and so on. Chris Keenan at Bournemouth conducted some (sadly unpublished) research and found that any non-attendance in the first fortnight was a good indicator of risk. I’d argue that missed classes in the final term are never great, but I’m pretty confident they are statistically less of a concern.
Perhaps alerts need to be more sensitive, with shorter deadlines, in the first term than in the final one.
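A term-sensitive policy like this could be as simple as varying the window by week of the course. The thresholds below are hypothetical, chosen only to illustrate the shape of the idea (tight in the first fortnight, looser later); any real values would need validating against institutional data.

```python
def alert_window_days(course_week):
    """Hypothetical policy: how many days of no engagement before an alert fires.

    Tighter early in the year, when non-attendance is a stronger risk signal,
    looser later. All thresholds are illustrative, not validated.
    """
    if course_week <= 2:    # first fortnight: any absence matters
        return 3
    elif course_week <= 12: # rest of the first term
        return 7
    else:                   # later terms
        return 14

print(alert_window_days(1), alert_window_days(8), alert_window_days(20))
```

The advantage of expressing the policy as a function of time is that it can be tuned term by term without changing the alerting machinery itself.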
3. What happens next?

You’re a tutor, you have an alert in your inbox: now what? I still feel that this is the biggest challenge. The textbook “Effective Personal Tutoring in Higher Education” offers good advice, including a chapter explaining how to conduct a ‘back on track’ interview, but contains relatively little evidence on the efficacy of tutorial interventions. I think this is a problem across the sector. Students who answer surveys tell us that they value the relationship with their tutor, but this is still an aspect of our work that I feel is relatively under-researched.
Between 2018 and 2021, we will be exploring these themes as part of the OfLA Erasmus+ project.