In summer 2020, in response to the first national COVID-19 lockdown, we used learning analytics data to manage a calling campaign to students who appeared to have been most disadvantaged by the transition to wholly online learning. In essence, we pulled together a team of volunteers to contact students who appeared to have disengaged from their studies. Over the first few weeks of the summer term, the team made over 5,700 calls and spoke to 2,300 students. Most conversations were straightforward, offering only reassurance and support, but 780 students requested further information, particularly about academic, financial and wellbeing matters.
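For anyone wondering what "appeared to have disengaged" might look like in the data, here's a minimal sketch of the kind of logic involved. The file, column names and threshold are all hypothetical illustrations, not our actual model:

```python
import pandas as pd

# Hypothetical engagement export: one row per student per week, with a
# composite engagement score (e.g. VLE logins, resource views, submissions).
# Columns assumed: student_id, week (integer week number), score.
engagement = pd.read_csv("weekly_engagement.csv")

# Split into a recent window (last four weeks) and the preceding baseline.
latest_week = engagement["week"].max()
recent = engagement[engagement["week"] >= latest_week - 3]
baseline = engagement[engagement["week"] < latest_week - 3]

# Average each student's score in both windows.
recent_mean = recent.groupby("student_id")["score"].mean()
baseline_mean = baseline.groupby("student_id")["score"].mean()

# Flag students whose recent engagement has dropped sharply relative to
# their own baseline, or who have stopped engaging altogether - these
# become the calling list. The 50% threshold is purely illustrative.
summary = pd.concat([baseline_mean, recent_mean], axis=1,
                    keys=["baseline", "recent"])
summary["recent"] = summary["recent"].fillna(0)  # no recent activity at all
call_list = summary[summary["recent"] < 0.5 * summary["baseline"]]

print(f"{len(call_list)} students flagged for a check-in call")
```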
I’m not quite sure how it happened, but we were asked to contribute a case study for the Office for Students report: Gravity assist – propelling higher education towards a brighter future. We contributed a few hundred words about the way that we used learning analytics to support students. Lack of space meant that the student quotes were cut, so here’s what you’re missing.
“Despite everything happening in the world I wasn’t forgotten about or abandoned by Uni”
(Clearly the quote’s for my self-promotional benefit rather than for mass enlightenment)
What’s the right way to write such extensive reports?
We are entering the final year of the Erasmus+ funded OfLA project. Over the last week, I have been reviewing two of the NTU case studies as we plan for the final set of outputs at the end of the project (summer 2021). It’s been interesting to re-read the reports and reflect on some of our learning analytics work.
- Interviews with students about automatically-generated alerts
- Review of a pilot study trialling different communication channels in partnership with one of our academic schools
Both reports are relatively brief (15 pages or so), but cover a lot of ground and need to provide both extensive background and easily summarised findings. I’ve come back to these reports from the perspective of the external reader looking to mine the case studies for goodies. That may be evidence of my impatience and shortened attention span during lockdown, but it’s interesting that the case studies need to fulfil very different objectives:
- Project author – how do I demonstrate sufficient rigour, and how do I share the interesting ideas that arose from the research?
- Project lead – how do I prove that we did stuff to meet the funder requirements: “look – we have words, lots of words, we actually did a lot here”
- Readers – where are the easy-to-implement answers? Where are the pithy, memorable anecdotes? Where’s the irrefutable, unambiguous evidence I can use in my workplace?
I think that the problem is particularly stark where we use qualitative data. I’m conscious that the form almost encourages two types of response:
- “A little bit of this – a little bit of that” – in an attempt to be rigorous, we end up with reports that don’t have conclusions – just a range of opinions.
- ‘No shit Sherlock’ – where we use quotes merely to point out the bleeding obvious (I’m now doubly unclear about why I’ve included the student quote at the top of the post).
I’m obviously just demonstrating my lack of mixed methods research chops, but I wonder to what extent the answer is about having a stronger editorial voice in our final OfLA project reports while leaving more open analysis to our earlier reports.
‘Gravity assist’ has one heck of a challenge: it has seven sections with good examples of how the sector has responded to coronavirus. There is good content in this report, but it’s perhaps one to dip into.

And because I sort of use this blog as a filing cabinet, I’ve saved a copy of the report (Gravity-assist-DTL-finalforweb) so I can find it in the future.