One of my first jobs after graduating was training students in transferable skills. I read an enormous amount of training theory: textbooks on time management, systems thinking, NLP, public speaking and so on. Certain keystone ideas stuck. One was, of course, David A. Kolb’s (1984) experiential learning cycle. I can remember explaining it by modelling the process of learning how to open one of the doors to the training room. I’m certain that I’ll have used the simple version – act, reflect, make rules, plan – rather than the fuller, more sophisticated version. I’ve used it a lot with students and there’s often an ‘of course!’ revelation.
Learning cycles, feedback loops and systems thinking are all ideas that inherently feel right. I think it’s partly because there are some rock-solid examples of these systems working, from the very simple (e.g. the way a thermostat works) to the very complex (e.g. the balance of predators and prey in an ecosystem). There’s something very logical about them. What learner wouldn’t want to know their goal, and to know how far they’ve missed it?
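The thermostat really is the textbook feedback loop: measure, compare against a goal, act on the gap, and let the world respond. A minimal sketch of that loop, with entirely made-up numbers for the heater and heat loss:

```python
# Minimal thermostat feedback loop: sense -> compare to goal -> act -> repeat.
# The heater power and heat-loss figures are invented, purely to show the loop.

def thermostat_step(current_temp, target_temp, heater_power=0.5, drift=-0.2):
    """One pass round the loop: act if we've missed the goal, then the room responds."""
    error = target_temp - current_temp          # how far have we missed it?
    heating = heater_power if error > 0 else 0  # act only when below target
    return current_temp + heating + drift       # the environment feeds back

temp = 15.0
for _ in range(30):
    temp = thermostat_step(temp, target_temp=20.0)
# temp now hovers around the 20-degree goal
```

The point is that the system never needs to be clever: a goal, a measurement and a simple rule are enough, which is exactly what makes these models feel so ‘right’.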
I’ve spent a lot of time thinking about learning cycles for a learning analytics paper we’re working on. I’ve looked at Kolb, Graham Gibbs’s cycle and, the daddy of them all, Argyris and Schön’s single- and double-loop learning, but I’ve mostly been thinking about Doug Clow’s (2012) learning analytics cycle. Learning analytics and learning cycles are natural bedfellows. Learning analytics should provide data that can be measured at different points in time. In theory, we ought to be able to deliver and evaluate interventions using the technology.
Clow’s model feels right. We start with learners; they create data through their interactions with the programme and institution. However, the data only has meaning once we attach metrics to it, analysing it against a model or through an algorithm. Finally we have the intervention: for example, automated feedback, or contact from a tutor, learning developers or student support services. In theory, this leads to a change in learner behaviour, new data, new ratings in our analytics system and so on.
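The cycle can be caricatured in a few lines of code: learner activity generates data, a metric turns that data into a signal, and the signal triggers (or doesn’t trigger) an intervention. Everything here – the login-count metric, the threshold, the ‘tutor contact’ action – is an invented placeholder to show the shape of the loop, not anything from Clow’s paper:

```python
# A deliberately naive sketch of the learner -> data -> metric -> intervention
# cycle. The metric (login count) and threshold are invented placeholders.

def metric(events):
    """Attach meaning to raw data: here, just count login events."""
    return sum(1 for e in events if e == "login")

def choose_intervention(score, threshold=3):
    """Turn the metric into an action, e.g. flag the learner for tutor contact."""
    return "tutor_contact" if score < threshold else None

events = ["login", "forum_post", "login"]   # data created by the learner
score = metric(events)                      # the analysis step
action = choose_intervention(score)         # the intervention step
```

Note where the code stops: it can decide that tutor contact is needed, but whether that contact changes the learner’s behaviour – the step that closes the loop – is exactly the part no algorithm performs.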
Job done, we can all go home.
So of course, this is just a framework. All of the stages are complicated. Capturing data requires cables, servers, computer programs and programmers (I probably should have put that the other way around; programmers are lovely, hard-working people). Metrics take time and need careful consideration. However, I personally think Intervention > Learner is the tricky step. If we assume that the agent of change is the institution, not the student, that step requires at least the following:
- Comprehensible early warnings through a dashboard or other early warning system (automated report, emailed alert, etc.)
- Agreed policy & sense of ownership about acting
- Agreed strategy dependent upon the nature of the early warning (e.g. missed classes probably require a different approach compared to a failed assessment)
- Time and means to communicate to students
- Time and means to communicate to the students who didn’t respond to the first communication
- Sufficiently skilled and confident staff to engage students in meaningful dialogue, or some form of pro-forma script
- Space and time for any interventions (real or virtual) to take place
- System for goal-setting or making referrals to learning developers, student support services etc.
I like the idea of learning cycles, I do.
However, I also think that their inherent ‘rightness’ is a problem. How many senior managers have looked at models like this and thought “that’s it, that’s the answer, just get me one of those” (actually probably none – this is the dialogue from a really bad movie, but you get my point).
There’s another issue: learning analytics looks technical. It’s all about computers, data, charts and stuff. It encourages us to think like engineers. That’s all very well, but learners are complicated, contradictory and often confused; we need to think like psychologists.
* To be absolutely fair to Doug, he never claims to have the answers as to how learning analytics changes behaviour; he’s just offering a useful model to conceptualise the process.