There’s something potentially of real value in developing and using wellbeing analytics, but there are also some challenges that need careful consideration. If the first set of challenges is defining wellbeing, the second is ‘metrics’. In essence, what meaning can we ascribe to the masses of data that we’ve got? Most importantly, what’s the association between a data point today and a particular behaviour or risk in the future?
I’ve written about Clow’s (2012) learning analytics model before. It’s really useful for conceptualising learning analytics, but also potentially a bit problematic because it rather implies that it’s all a straightforward process. Clow suggests that there are four stages to the use of learning analytics (sketched in code after the list below):
- Learners – individual students going about their everyday business of learning (or not). Importantly, this is both the starting point and the end point of the cycle. The cycle starts with the student’s ongoing daily activity, but we’re also interested in any changes that happen, particularly in the light of interventions carried out.
- Data – this is simply the traces left behind by students in university systems. It is usually automatically generated, but wellbeing analytics are prompting discussions about self-reported, or even tutor-reported, data. This may not be ‘pure’ analytics, but it is likely to be a feature of wellbeing analytics work.
- Metrics – data only has meaning when we ascribe meaning to it. In my more normal work (perhaps best described as ‘engagement’ or ‘student success’ analytics) we tend to look at the association between data points and academic withdrawal. In our work, students with high engagement are highly likely to pass the year and to achieve higher grades; those with ‘low’ engagement, the reverse. What’s the association we’re looking for with wellbeing analytics?
- Intervention – I’d always argue that what distinguishes analytics from management information is the assumption that, where needed, an intervention will follow. What’s the intervention for wellbeing analytics? Could it be student-initiated, or does it need to start from the institution? One hypothesis about wellbeing analytics is that early intervention will prevent more serious intervention later. This feels right, but I do wonder whether the effect of heuristics means that students might only react when things have got sufficiently serious for them.
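To make the cycle concrete, here’s a minimal sketch of the four stages as a data flow. Every function name, data field and threshold here is hypothetical, invented for illustration rather than describing any real institutional system:

```python
# A hypothetical sketch of Clow's four-stage cycle as a data flow.
# Names and thresholds are illustrative only, not a real system.

def collect_data(student_id: str) -> dict:
    """Stage 2: gather the traces a student leaves in university systems."""
    # In practice this would query the VLE, attendance, library systems etc.
    return {"logins_this_week": 1, "submissions_missed": 2}

def compute_metric(data: dict) -> float:
    """Stage 3: ascribe meaning to the data as a single engagement score."""
    # Crude illustrative scoring: logins count for, missed submissions against.
    return data["logins_this_week"] - 2 * data["submissions_missed"]

def intervene(student_id: str, score: float) -> None:
    """Stage 4: act on the metric, feeding back to the learner (stage 1)."""
    if score < 0:  # hypothetical threshold
        print(f"Contact {student_id}: engagement score {score} is low")

for student in ["s001", "s002"]:
    intervene(student, compute_metric(collect_data(student)))
```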
So what metrics should we be looking for?
There are two distinct metrics that need to be considered; the sketch after this short list frames them as a prediction problem.
- We need a solid unambiguous point in time where we can say ‘this student has a wellbeing issue’.
- We need a second metric that acts as a strong indicator that at some point in the foreseeable future, ‘this person is likely to meet criterion 1’.
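In machine learning terms, metric 1 is the ground-truth label and metric 2 is the predictor we score today. A minimal sketch of that framing, where the field names and the 84-day horizon are my assumptions rather than anything agreed:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical framing of the two metrics as a labelled prediction problem.
# Metric 1 is the confirmed outcome; metric 2 is the signal computed today.

@dataclass
class WellbeingRecord:
    student_id: str
    observed_on: date               # the day we computed the predictor
    predictor_score: float          # metric 2: today's risk indicator
    problem_onset: Optional[date]   # metric 1: first day a problem is confirmed

def is_true_positive(rec: WellbeingRecord, horizon_days: int = 84) -> bool:
    """Did a flagged student go on to meet criterion 1 within the horizon?"""
    if rec.problem_onset is None:
        return False
    return 0 <= (rec.problem_onset - rec.observed_on).days <= horizon_days

rec = WellbeingRecord("s001", date(2023, 10, 2), predictor_score=-1.5,
                      problem_onset=date(2023, 11, 20))
print(is_true_positive(rec))  # True: onset 49 days after the observation
```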
When does someone have a wellbeing ‘problem’?
In order to be able to predict those students who are likely to need support in the future, we need an agreed definition of what constitutes a wellbeing problem. Effectively, we need to be able to answer the question, ‘what’s the first day that a student has sufficient issues with their wellbeing that it has become a problem?’. If I’m unhappy today, do I have a wellbeing problem? What about if I’m feeling anxious about a presentation? Are these wellbeing problems, or does it need to be disabling? Does it only become a problem when a student stops attending classes, or fails to submit, or withdraws? I’m trapped by my own thinking a little here. I’m aware that I’m looking for something binary (‘yes there’s a problem’/‘no there’s not a problem’); the answer may be far more fluid and nuanced, but we’re talking about computers and systems here. Wherever possible, this needs to be kept simple.
- The first option being discussed is whether or not students with wellbeing problems approach the institutional support services, particularly mental health services. But if more students use student support services, is that evidence that more students have wellbeing problems, or that more students know about wellbeing support? And I don’t want to belittle the problem, but if we tell new students that university is going to be difficult and they’ll need to access student support services to cope, we’re creating a paradigm with a self-fulfilling element. Finally, it’s possible that students receive support from sources other than Student Support Services, for example from peers or family members – they may have wellbeing needs, but aren’t picked up by this metric.
- The second possibility is that we use self-reporting as a tool. This has the advantage of using students’ own voices and ought to be extremely authentic. There are a few tools that could be used to self-report mental health, including the Warwick-Edinburgh Mental Wellbeing Scales or the World Health Organisation’s Five Well-being Index (there’s a scoring sketch just after this list). Completing these survey tools could be a really valuable nudge for a student to access support, and is potentially healthy to do anyway, but there are some issues with this approach. Getting students to complete surveys is difficult; getting students who are more at risk of wellbeing problems or mental ill health to do so may be even harder. In a recent study, colleagues at Northumbria University persuaded 56% of students to complete the WHO survey in semester 1, but only 12% on two separate occasions in semester 2. Getting that many students to complete the survey is a significant achievement, but from an analytics perspective, I’m not sure it’s enough.
- Clearly we could simply add wellbeing analytics onto existing engagement analytics. As I understand it, most learning analytics systems can predict reasonably well which students are most at risk of early departure before they withdraw. Our own internal survey work suggests that students with low engagement are less satisfied with their experience and are more stressed or anxious about their studies. Moreover, we found that students with mental health conditions were more likely to have significant gaps in their day-to-day engagement compared to their peers. In our system, they were twice as likely to generate no-engagement alerts (a strong indicator that they are at risk of early departure).
- Withdrawals or low grades might be reasonable proxies for wellbeing and anxiety, but clearly they’re not perfect, and equating one with the other feels a little problematic. It may be possible to take more extreme measures, for example using data from students who have withdrawn from their studies and stated that they withdrew due to mental health problems. Or, it may be possible to use the data of those students who have, tragically, died by suicide. However, with both data sources there may simply not be enough data to use.
- [edit 7th July, 2023] I realise that I’ve missed an important data point: mitigating circumstances/exceptional circumstances. These are given different names in different institutions, but they are essentially the rules around asking for extensions, or deferring submissions, due to problems such as illness. These are unambiguous points in time at which a student has told us that they have an issue for which they require some additional support, care or time.
- One final option might be to use background data. I’m not certain which groups are most in need of wellbeing support, but if we could identify that one racial group, entry route or class is more at risk, that would be one possible route to offering support. I’m innately wary of this approach, primarily due to the issues associated with stereotyping, both for the individual student and for how we, as a community, start to view groups of students.
We have lots of potential sources that we can use, but they have issues.
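As an aside on what scoring one of those self-report instruments involves, here’s a minimal sketch of the WHO-5 arithmetic. The five-items-times-four scoring follows the published instrument; the cut-off of 50 and the idea of flagging for follow-up are my assumptions about how a system might use it, not part of the tool itself:

```python
# Sketch of scoring the WHO-5 Well-Being Index: five items each rated
# 0-5, raw score multiplied by 4 to give a 0-100 percentage. The
# flagging logic below is a hypothetical system behaviour, not part
# of the instrument.

def who5_percentage(item_scores: list[int]) -> int:
    """Convert five 0-5 item ratings into the 0-100 WHO-5 percentage."""
    if len(item_scores) != 5 or any(not 0 <= s <= 5 for s in item_scores):
        raise ValueError("WHO-5 needs exactly five ratings between 0 and 5")
    return sum(item_scores) * 4

def flag_for_followup(item_scores: list[int], cutoff: int = 50) -> bool:
    """Hypothetical trigger: a score at or below a commonly cited
    cut-off of 50 prompts a supportive contact."""
    return who5_percentage(item_scores) <= cutoff

print(flag_for_followup([3, 2, 1, 2, 2]))  # 40% -> True, would prompt contact
```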
What predictors can we use to identify potential wellbeing problems before they occur?
Knowing what the agreed indicator of a wellbeing problem is will not be enough on its own. We also need an agreed indicator that can give us a reliable enough indication that TODAY this student is more likely to be at risk of a wellbeing problem TOMORROW, or at an agreed date in the future.
In our work on engagement analytics, low or patchy engagement is a strong indicator that a student is at risk of not completing the year and/or achieving a lower grade. The association is strongest when we average together data for the whole year (but obviously it’s far too late to act by then); even after just the first week of term, however, there are clear indications in place. It’s not perfect (and never will be), but it’s good enough. We can say TODAY that low engagement is an indicator that, before the end of the year, a student is more at risk of leaving early or failing their course than their peers, particularly peers with high engagement.
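For what it’s worth, a first-week flag of the kind described above might look something like this minimal sketch; the window and threshold are invented for illustration, not our production values:

```python
# Hypothetical sketch of an early engagement flag. Each value is a
# daily engagement count (logins, swipes, submissions etc.); the
# window and threshold are invented for illustration.

def early_flag(daily_counts: list[int], window: int = 7,
               threshold: float = 2.0) -> bool:
    """Flag a student whose average engagement over the first `window`
    days falls below a threshold."""
    first_week = daily_counts[:window]
    return sum(first_week) / len(first_week) < threshold

print(early_flag([0, 1, 0, 2, 0, 1, 0]))  # mean ~0.57 -> True, flagged
print(early_flag([3, 4, 2, 5, 3, 4, 3]))  # mean ~3.43 -> False
```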
A second consideration is that the data needs to be quick and easy to generate. If it takes a week to put data onto a system, it’s not likely to be particularly useful (although of course, there may be lots of good reasons for logging this data however slowly).
Finally, there’s a huge consideration about false positives. I can predict 100% of the students who will have wellbeing problems: they will be students who have enrolled at my institution. But whilst my prediction will correctly pick out those students at risk, it’s likely to also pick out thousands of students who are not at risk of wellbeing issues. If the purpose is to trigger an institutional intervention, this is a huge potential problem for staff capacity; if it’s intended to prompt a communication, then it’s less of an issue. Although we should be careful even here – are there risks of nudging students into believing that they have a problem?
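To make the base-rate problem concrete, here’s that ‘flag everyone’ predictor in numbers; the cohort size and prevalence are invented for illustration:

```python
# The 'flag every enrolled student' predictor from the paragraph above,
# in numbers. Cohort size and prevalence are hypothetical.

cohort = 20_000           # hypothetical enrolled students
at_risk = 1_500           # hypothetical true number with wellbeing problems

flagged = cohort          # we flag everyone...
true_positives = at_risk  # ...so we catch every at-risk student

recall = true_positives / at_risk     # 1.0: we miss nobody
precision = true_positives / flagged  # 0.075: most flags are wrong

print(f"recall={recall:.0%}, precision={precision:.1%}")
# -> recall=100%, precision=7.5%: 18,500 students contacted unnecessarily
```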
What data can we use to indicate that there may be a wellbeing problem in the future?
- I clearly have a cognitive bias towards simply using existing engagement data (and am really happy to be challenged on this). There is an association between low and erratic engagement and mental health conditions, and students with low engagement are less satisfied and more stressed than their peers. Obviously, this could be combined with other data points such as surveys, or we could pick out specific individual elements (for example ‘attendance’ or ‘non-submission of coursework’) if they are appropriate.
- There is something appealing about the use of surveys, but I don’t think that they can be the whole system. One route might be for students to complete surveys and for us to provide responsive support based on the answers that they give, but it can’t be the whole picture.
- Fixed characteristics such as background or entry qualifications could also work as the prompt to act, but this feels like a blunt tool. An alternative to thinking about fixed characteristics would be to think about problems at fixed points in time: one of my very excellent colleagues pointed out that we may wish to treat all alerts around assessment deadlines differently, due to the additional pressure students face at that time. Similarly, surveys in our original HERE project found that students who considered leaving early in the year cited social, not academic, anxieties as their reasons.
- Instead of thinking about whole systems and whole patterns of student behaviour, it may be possible to use smaller triggers. For example, visiting a page about mental health support on the university’s website is probably a good indicator that someone has a concern (see the sketch after this list). Clearly, there are some ethical considerations around student privacy, a potentially very short time frame between the query and a wellbeing problem, and issues with incomplete data (what happens if a student visits The Samaritans website instead?), but this may be one way to identify students at risk in advance.
- [edit, 7th July 2023] Once again, the mitigating circumstances rules might work as an effective trigger for action, although arguably using them as a trigger is too late.
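A minimal sketch of what that page-visit trigger might look like, assuming hypothetical URL patterns and a human review queue rather than automatic contact; a real version would need the ethical and privacy questions settled first:

```python
# Hypothetical sketch of the support-page trigger from the list above.
# The URL patterns, and routing the event to a wellbeing team rather
# than acting automatically, are assumptions for illustration.

SUPPORT_PATTERNS = ("/student-support/mental-health", "/counselling")

def is_wellbeing_signal(visited_url: str) -> bool:
    """Return True if a page visit looks like a help-seeking signal."""
    return any(p in visited_url for p in SUPPORT_PATTERNS)

def handle_page_visit(student_id: str, url: str) -> None:
    if is_wellbeing_signal(url):
        # Queue for human review rather than auto-contact: the window
        # between the visit and a crisis may be very short.
        print(f"Review queue: {student_id} visited {url}")

handle_page_visit("s042", "https://example.ac.uk/counselling/appointments")
```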
Putting it all together
I’m writing this as a problem-solving exercise. I need to understand what metrics we could use in order to work with data and IT experts to actually build systems. I’m a little conscious that there’s not an obvious and simple route here. I can see lots of ways that we could use distinct data points, both as indicators that a student has a wellbeing issue and as an early warning of one, but I think I need a bit of reflecting time. My instinct right now is that we consider wellbeing analytics as an additional set of filters to sit on top of existing engagement analytics (one possible reading of this is sketched below). This might be the wrong approach and I’d welcome thoughts from proper experts.
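To be concrete about what I mean by ‘filters’, here’s one hypothetical reading: start from the students the engagement system already flags, then keep only those with an additional wellbeing signal. Every field and rule here is invented for illustration, not a design:

```python
# One hypothetical reading of 'wellbeing filters on top of engagement
# analytics': narrow the existing engagement alerts using
# wellbeing-specific signals. All fields and rules are invented.

def wellbeing_filter(engagement_alerts: list[dict]) -> list[dict]:
    """Keep engagement alerts that also carry a wellbeing signal."""
    def has_wellbeing_signal(alert: dict) -> bool:
        return (alert.get("near_deadline", False)          # assessment pressure
                or alert.get("low_who5", False)            # poor survey score
                or alert.get("mit_circs_request", False))  # asked for extension
    return [a for a in engagement_alerts if has_wellbeing_signal(a)]

alerts = [
    {"student": "s001", "near_deadline": True},
    {"student": "s002"},  # engagement alert, but no wellbeing signal
]
print(wellbeing_filter(alerts))  # -> only s001 survives the filter
```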
I’m also conscious that I’m overthinking this. Any system, however imperfect, that reaches out to students at risk is likely to be a positive thing.
I’ll come back to this and consider Clow’s ‘Intervention’ stage next.