In 2013 we worked closely with the (then) tech start-up Solutionpath to build the pilot version of the Student Dashboard. Over a few months in the summer we went through a development process:
- Solutionpath built their proof-of-concept algorithm
- We shared multiple years of anonymised data, which Solutionpath used to test whether the algorithm could spot the students most likely to be at risk of early departure.
- We judged the algorithm accurate and timely enough to justify a pilot study and agreed to progress.
However, there was still a huge amount of development to carry out.
- Who was the tool for?
- What information did we need to show?
- What was the best way to display student information?
During this period we made several important decisions. Firstly, we agreed that the tool should be both staff- and student-facing. This meant that if students were to be users, we needed more motivational language, so we agreed to flip the metric from a measure of risk to a measure of engagement. In the first prototype ‘high’ meant ‘highly at risk of dropping out’; it now means ‘highly engaged’. I’m going to personally lay claim to that one.
But more importantly, we had long discussions about the ethics behind the algorithm. We kept returning to one issue: to what extent should we use background characteristics in the algorithm?
There’s a good argument for including this data. We know from studies by HEFCE and the Equality Challenge Unit that the likelihood of progression and success is profoundly influenced by a wide range of factors:
- Socio-economic background (widening participation students are less likely to progress or graduate than their more affluent peers)
- Gender (males do worse in every measure apart from graduate employment)
- Ethnicity (white students tend to perform better than their BME peers)
- Age (younger students tend to perform better)
- Nationality (UK students perform better than their international peers)
- Disability (students without disabilities tend to have better outcomes than their disabled peers)
- Entry qualifications (students entering university with the highest entry qualifications perform better than those entering with lower or vocational qualifications)
Logically, therefore, it’s a no-brainer: surely you include background characteristics in the algorithm?
Surely?
Well, no. No, not really.
Imagine the scenario: two students attend the same classes, log on to the VLE exactly as often as each other, use the library just as much, submit their coursework on time, and spend precisely the same amount of time on campus.
But one is female and one is male.
If we use the students’ backgrounds in the algorithm, the male student will have a lower engagement score just because he is male. Which is uncomfortable, but replace male/female with BME/white…
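To make the dilemma concrete, here is a minimal sketch in Python. It is emphatically not the Solutionpath algorithm, which is proprietary; the scoring function, activity weights and demographic adjustment are all invented for illustration. It simply shows that once a background term enters the score, two behaviourally identical students no longer score the same.

```python
# Hypothetical sketch: a weighted-sum engagement score, with an optional
# (invented) demographic adjustment. All names and weights are made up.

ACTIVITY_WEIGHTS = {
    "vle_logins": 0.4,        # VLE activity, normalised 0-100
    "library_visits": 0.3,
    "on_time_submissions": 0.2,
    "campus_hours": 0.1,
}

# Invented adjustment a background-aware model might apply: male students
# are, on average, at higher risk, so their predicted engagement is lowered.
DEMOGRAPHIC_ADJUSTMENTS = {"male": -5.0, "female": 0.0}

def engagement_score(activity, demographics=None, use_background=False):
    """Weighted sum of normalised activity measures, optionally
    adjusted by background characteristics."""
    score = sum(ACTIVITY_WEIGHTS[k] * activity[k] for k in ACTIVITY_WEIGHTS)
    if use_background and demographics:
        score += DEMOGRAPHIC_ADJUSTMENTS.get(demographics["gender"], 0.0)
    return score

# Two students with *identical* behaviour.
activity = {"vle_logins": 80, "library_visits": 70,
            "on_time_submissions": 100, "campus_hours": 60}

# Engagement-only model: both students score the same.
print(engagement_score(activity))  # 79.0, for both students

# Background-aware model: the male student scores lower
# for something he cannot change.
print(engagement_score(activity, {"gender": "male"}, use_background=True))    # 74.0
print(engagement_score(activity, {"gender": "female"}, use_background=True))  # 79.0
```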
There were two reasons to leave background characteristics out.
Firstly, background characteristics are a blunt tool; they don’t necessarily reflect the individual student. Yes, on average, students from poor backgrounds do less well, but not all of them do. By focusing on background rather than engagement, we would be fixing our students’ socio-economic disadvantage into the system, rather like unconscious bias.
Secondly, students can’t change their background, but they can change what they do whilst at university. They can change their engagement. We wanted a tool that students could use themselves. We needed a tool that they’d find motivational.
In 2013, we therefore agreed to focus only on student engagement. The system made no reference to background. I’d like to claim that this was my wisdom, but in truth it came through discussion and debate amongst a group of motivated colleagues including Angie Pears, Sarah Lawther, Mike Kerrigan, Ann Liggett, Melanie Currie, Steve Wheelhouse, Mike Day and Jacqui Tyler. Solutionpath had the brilliance to make this happen, but the insight was developed by the end users.
In January 2015 we analysed progression data. The strongest predictor of success was not background characteristics or entry qualifications; the most important predictor of early departure was average engagement with the course. Yes, differentials still exist: highly engaged female students do better than highly engaged male students. But the difference between high average engagement and low average engagement absolutely dwarfs background characteristics.