Considerable time, energy and money have been invested in your online course, so it’s natural (and savvy) to want to know whether it actually does what it’s designed to do.
In the pursuit of user data, there can be a tendency to make course evaluation compulsory for learners. But compulsory evaluation may not give you accurate data, and it may damage your relationship with your learners.
...you’ve just finished an online course. You’ve chipped away at it for a couple of hours a week for the last month and you’ve finally got to that last screen. Relief. Hooray. And then up pops a “Please complete our course evaluation” screen.
Your expectation of completion has been dashed. You’ve learned a bit through the course, but you feel like there’s nothing you need to feed back to the administrators. You’re resentful that your time is being taken up by something that’s not essential to your learning.
You click through the questions with “yeah whatever” answers and finish your learning experience feeling a tad used.
Seeing as we tend to remember how things end, is this the lasting memory you want your learners to have of your course?
Course evaluation helps us determine whether an online course is doing what it’s designed to do – helping learners learn – but it doesn’t need to be compulsory.
In fact, there are much better ways to integrate evaluation so that you get happy learners at completion; learners who want to voluntarily give you feedback.
We don’t tolerate coercion in other areas of work or life so it seems inappropriate to make online course evaluation compulsory.
Studies have found that students who participate in voluntary evaluation provide more accurate data. (1) That 100% evaluation compliance metric might just be clogging up your spreadsheet with dodgy data.
So the question is: Do you want quality or quantity? How could you get both?
How could you design your course so that you gather metrics of engagement, participation and satisfaction as the learner progresses? Be clear about what you need to measure and why.
How could you design your evaluation so that they can better understand themselves as learners?
The literature suggests that evaluation be learner-focused rather than an exercise in metrics for the organisation (2).
How could you design your questions so that the learner continues to learn?
Along with valuing your learners’ opinions and experience, you also want to demonstrate that you value their time. So keep your evaluation short and your questions transparent.
What do you really need to know to make the course better?
Rather than tack your evaluation on the end of your course, when your learners have competing priorities, integrate it into a part of the course when the learner is fully engaged.
When is the most useful time for learners to be reflecting on the course?
Remember the difference between knowledge acquisition and learning.
It is of little benefit to your learners to be able to spruik the five indicators of firefly infestation if they can’t apply that knowledge in their work or life.
So design your evaluation so that learners complete their course confident that their learning will be applied.
How will your learners know the course has been relevant and will help them in their work and life?
By making your evaluation learner-focused and voluntary, you will not only design better courses but your learners will also come away feeling more satisfied with the course and that they’ve contributed in a meaningful way.
1. Liu, J. et al. (2019) Students’ learning outcomes and peer rating accuracy in compulsory and voluntary online peer assessment. Assessment & Evaluation in Higher Education. 44(6): 835-847. Bahous et al. (2018) Voluntary vs compulsory student evaluation of clerkships: effect on validity and potential bias. BMC Medical Education. 18(9).
2. Edstrom, K. (2008) Doing course evaluation as if learning matters most. Higher Education Research & Development. 27(2): 95-106.