After deploying training to thousands of learners, you are wondering whether it was worth the effort: do these courses lead to a real increase in your learners' skills?
You have therefore decided to assess the various dimensions of the pedagogical effectiveness of these courses. Be careful though: several traps stand in your way and can lead you to the wrong conclusions.
To outsmart them, we suggest four precepts that will guide your assessment process and allow you to obtain reliable information.
Your training is over and you have given a final assessment to all your learners. You find that the average success rate on this test is 80%. An overly enthusiastic trainer would quickly conclude that the training is a great success.
Is that really the case? Those who have read our article on educational effectiveness carefully will no doubt have spotted the trap: how can you estimate the progress your learners owe to your training without knowing their starting point?
To solve this problem, you can ask your learners to take a positioning test: a test representative of what they should be able to do at the end of the training. If the objective of your training is to teach how to cook a soufflé, for example, you can ask the learners to make a soufflé before and after the training.
In practice, the positioning test is often seen as a waste of time or as a threat by the learner. Yet this information is essential to see whether the proposed training leads to a real increase in skills. What is more, offering an assessment at the very start of learning (1) is itself an opportunity to learn.
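As an illustration, here is a minimal sketch (in Python, with hypothetical learner names and scores) of how positioning-test and final-assessment scores can be turned into a per-learner progress measure:

```python
# Minimal sketch with hypothetical data: both tests are scored from 0 to 100.
pre_scores = {"Alice": 20, "Badri": 15, "Chen": 25}   # positioning test
post_scores = {"Alice": 85, "Badri": 70, "Chen": 85}  # final assessment

# Progress of each learner, and the average gain across the group.
gains = {name: post_scores[name] - pre_scores[name] for name in pre_scores}
average_gain = sum(gains.values()) / len(gains)

print(gains)                                   # per-learner progress
print(f"Average gain: {average_gain:.1f} points")
```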
You measured the performance of your learners before and after your training. You notice that their average success rate has increased from 20% to 80%. Given the magnitude of the difference, it would be natural to conclude that the training is highly effective. However, that would be premature at this stage: we do not know what would have happened if the learners had received no training on this subject.
How can you know what your training is worth without a point of comparison? Human beings are very good learners, capable of progressing with even the worst possible materials. Remember: you have probably learned at least something from your most boring courses.
It is therefore necessary to systematically compare the increase in competence of your learners with that of a control group. This group may have received a variant of your training, if you are looking to validate your design choices, or a training that does not involve the same skills.
There are two reasons why it is useful to have a control group:
- to check that the progress you observe is really due to your training, and not to factors such as simply retaking the test or learning that would have happened anyway;
- to compare variants of your training and validate your pedagogical choices.
You must therefore assess the effect of your training, all other things being equal. How can you do this concretely? You don't have to be a lab rat to add a control group to your courses: all you need to do is deploy two different training courses to two different groups.
Let's say one group of learners is going to learn how to cook a cheese soufflé, while another group is going to learn how to cook a chocolate cake. Before the training, and again at the end, each group will be tested on its ability to cook a cheese soufflé and a chocolate cake. So, if the group that was trained to cook a cheese soufflé cooks soufflés better on average than the group that was trained to cook a chocolate cake, you will know that it is thanks to the training.
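Concretely, here is a minimal sketch (in Python, with hypothetical learner names) of randomly assigning learners to the two groups, so that they are comparable all other things being equal:

```python
import random

# Hypothetical cohort; random assignment keeps the two groups comparable.
learners = ["Alice", "Badri", "Chen", "Dana", "Emil", "Farah"]
random.shuffle(learners)

midpoint = len(learners) // 2
group_a = learners[:midpoint]   # will be trained on the cheese souffle
group_b = learners[midpoint:]   # will be trained on the chocolate cake

print("Training A (souffle):", group_a)
print("Training B (cake):", group_b)
```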
With precept No. 2 in mind, you decide to compare the increase in competence of your learners with that of a group of "control" learners. You notice that those who received training A saw their average success rate increase from 20% to 80%, while those who received training B went from 20% to 50%. Given the magnitude of the difference, this time it seems indisputable: training A is more effective than training B.
Be careful though: when comparing averages, big differences may mean nothing. Look at the graph below, which represents two possible distributions of learners according to their performance on the evaluations.
The pink curve corresponds to the learners who took training A, the blue curve to those who took training B. The higher the curve at a given point, the more learners achieved that success rate.
In both cases, the average success rate on the final assessment is 80% for training A and 50% for training B.
However, the "width" of the curves is not the same from one graph to the other, and wider curves may overlap too much. When this is the case, it means that a significant number of learners could just as well belong to group A as to group B (the purple area on the graph). The observed difference in means may then simply be due to chance, and will therefore not be considered significant (2).
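As a sketch of what such a check can look like in practice (footnote (2) below lists ready-made tools), here is a Welch's t-test on hypothetical final scores, using SciPy:

```python
from scipy import stats

# Hypothetical final-assessment scores for the two groups (0-100).
scores_a = [78, 85, 90, 72, 81, 88, 75, 83]  # training A
scores_b = [49, 55, 42, 60, 47, 52, 58, 45]  # training B

# Welch's t-test: does not assume equal variances between the groups.
t_stat, p_value = stats.ttest_ind(scores_a, scores_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A p-value below the conventional 0.05 threshold suggests the difference
# in means is unlikely to be due to chance alone.
```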
You measured the performance of your learners, separated them into two groups, and used a statistical test to check whether the difference between their performances was significant. Your efforts are rewarded: it is indeed the case. Given the observed progression (from 20% to 80% for training A, from 20% to 50% for training B), you should logically be able to conclude that training A is twice as effective as training B.
... Not so fast. The results you have obtained are only valid for:
- the population you actually tested, that is, learners with this starting level;
- the comparison you actually made, that is, training A versus training B.
For example, your learners had a success rate of 20% before the training: this could mean that your training is particularly well suited to novices. If you give the same training to learners with an initial success rate of 50%, it is possible that the effect is reversed and that training B is now the better fit. The same risk exists if you compare training A to a course other than training B. In any case, caution is needed when extending your results beyond the conditions you tested.
To really assess the effectiveness of your courses, it is in your best interest to apply the following precepts:
- measure your learners' starting point with a positioning test;
- compare their progress with that of a control group;
- check that the differences you observe are statistically significant;
- be cautious when generalizing your results beyond the learners and comparisons you actually tested.
These precepts are easier to apply than they seem, and will allow you to ensure that your training has a demonstrable effect. Faced with ever-growing skills development needs and increasingly demanding learners, the pedagogical effectiveness of training courses is no longer a luxury but a necessity.
(1) See in particular: Richland, L. E., Kornell, N., & Kao, L. S. (2009). The pretesting effect: Do unsuccessful retrieval attempts enhance learning? Journal of Experimental Psychology: Applied, 15(3), 243.
(2) How do I know if a curve is "too wide"? Statistical tests exist to tell you whether the difference between your two groups of learners is due to chance or to a real effect of your training, in other words whether the difference is significant. You will find a demonstrator on the site "Evan's Awesome A/B Tools". If you want to go further, training courses on ANOVA exist to help you make the best use of such tests.
Make an appointment directly with our eLearning experts for a demo or simply more information.