Educational effectiveness: measuring it, yes, but how?


So that's it: you've decided to measure your pedagogical effectiveness.

After identifying the dimensions that will be useful to you in the article What does pedagogical effectiveness mean? and learning about the procedure to follow (Educational effectiveness: beware of misinterpretations!), you're ready to make your data talk. Yes, but... which data are we talking about? Faced with the profusion of information you can collect about your learners, it is legitimate to feel a bit lost. Where do you start? This article will help you see more clearly what certain types of data tell you (and don't tell you), and how to choose the metrics that best fit your needs.

1) Subjective data: I say, so I know?

A common reflex is to ask learners directly what they think of the training they have just received. Was it useful to them? Do they feel they have made progress? After all, if the training was effective, they should feel it. What's more, this measure is particularly easy to obtain: all you have to do is administer a short questionnaire at the end of the training.

In reality, it is not that simple: many factors influence learners when they judge the effectiveness of a training course. First of all, we are not all equal when it comes to estimating our real state of knowledge: it depends on our metacognition, that is, our ability to accurately assess our cognitive state and adjust it to reach the goal we have set for ourselves (see our article Learning to learn in order to be better trained about metacognition and its role in learning). When it comes to metacognition, however, some people are very accurate... others less so. As a result, the quality of the feedback your learners give you will vary greatly from one individual to another.

In addition, other factors may come into play. For example, if the trainer was particularly entertaining, learners will have had a positive experience during the training, which they may confuse with the feeling of having learned. While believing you are measuring educational effectiveness, you will actually be measuring how likeable the person delivering the training is.

If these measures are so imperfect, should subjective data be dispensed with entirely? We won't go that far. This data is still useful, but you have to take it for what it is: information about what learners think they have learned, not about what they have actually learned. It does not allow you to assess your pedagogical effectiveness as such, but it gives you other information that can shed light on it: for example, it may be interesting to track how the gap between what learners think they have learned and their measured gain in competence evolves across different courses.
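To make that comparison concrete, here is a minimal sketch in Python, with purely illustrative data and field names (not Didask's actual export format), of how you might put perceived progress and objective pre/post gains side by side:

```python
# Illustrative sketch: comparing self-reported progress (Likert 1-5) with the
# objective pre/post gain measured on a quiz scored out of 100.
learners = [
    {"id": "a01", "self_reported": 5, "pre_score": 40, "post_score": 55},
    {"id": "a02", "self_reported": 2, "pre_score": 35, "post_score": 70},
    {"id": "a03", "self_reported": 4, "pre_score": 60, "post_score": 62},
]

for learner in learners:
    objective_gain = learner["post_score"] - learner["pre_score"]
    # Rescale the Likert answer to the same 0-100 range for a rough comparison
    perceived_gain = (learner["self_reported"] - 1) / 4 * 100
    gap = perceived_gain - objective_gain
    print(f"{learner['id']}: perceived {perceived_gain:.0f}, objective {objective_gain}, gap {gap:+.0f}")
```

A large, systematic gap in one course is itself a useful signal: it tells you where learners' impressions and their measured progress diverge.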

In addition, subjective data is an excellent way to estimate the level of learner satisfaction, which can be an objective in itself in your assessment process.

In short:

  • subjective data is easy to obtain from your learners;
  • do not confuse what your learners think of their progress with their actual progress;
  • subjective data is not an indicator of the pedagogical effectiveness of your training;
  • it does give you other information that can complement your evaluation of effectiveness: feeling of progress, satisfaction, etc.

2) Behavioral data: I do, so I know

Since subjective data is not enough to assess your educational effectiveness, you need to turn to a different type of data. The ideal is to use so-called "behavioral" data, in other words data that measures your learners' behavior with a certain reliability. Rate of correct answers on a quiz, response time, number of answer changes... this data is easy to collect, especially in a digital environment. Yet it is rarely exploited, even though it is what will best inform you about your pedagogical effectiveness.

When a learner answers a question incorrectly, this obviously depends on several factors... but mainly on their level of knowledge. The more questions and answer choices you offer, the more accurate your estimate of their competence will be. Provided you offer appropriate positioning tests beforehand, you will be able to estimate your learners' level of competence before and after the training, and compare it across courses (as we detailed in the previous article in our series on educational effectiveness).
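As a sketch of what this can look like in practice, assuming you can export per-learner scores out of 100 on the same positioning test taken before and after each course (the data below is invented), you could compute an average normalized gain per course and compare them:

```python
from statistics import mean

# (pre_score, post_score) pairs per course, scores out of 100 -- illustrative data
results = {
    "Course A": [(40, 75), (55, 80), (30, 70)],
    "Course B": [(45, 55), (60, 65), (50, 58)],
}

for course, pairs in results.items():
    # Normalized gain: share of the remaining margin actually gained, (post - pre) / (100 - pre)
    gains = [(post - pre) / (100 - pre) for pre, post in pairs if pre < 100]
    print(f"{course}: average normalized gain = {mean(gains):.2f}")
```

Normalizing by the margin left to gain makes courses comparable even when learners start from different levels.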

To go further, you can also look at other behavioral data from your users, such as response time. Did they become faster as the application exercises were repeated (Testing effect - Practicing is the key to success)? If so, that too is information about the quality of your training.
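A simple way to look at this, sketched below with invented data, is to fit a trend line to one learner's response times across repeated attempts: a negative slope suggests they are getting faster.

```python
from statistics import mean

# Response times in seconds for one learner, ordered by attempt -- illustrative data
response_times = [42.0, 31.5, 27.0, 22.4, 21.8]
attempts = list(range(1, len(response_times) + 1))

# Least-squares slope of response time against attempt number
x_bar, y_bar = mean(attempts), mean(response_times)
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(attempts, response_times)) \
        / sum((x - x_bar) ** 2 for x in attempts)
print(f"Average change per attempt: {slope:+.1f} s")  # negative means speeding up
```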

So is behavioral data the Holy Grail of learning analytics? It all depends on how you obtained it. For example, if you offer evaluations that are too easy, you will no doubt achieve extremely high success rates. Will these be indicative of the effectiveness of your training... or simply a marker of how easy your evaluations are? It is therefore wise to offer evaluations whose difficulty ranges from very easy to very hard.
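One way to check that your evaluations actually span that range is to estimate each item's difficulty from its share of correct answers, as in this minimal sketch (invented data):

```python
from statistics import mean

# For each question, 0/1 outcomes across learners -- illustrative data
answers = {
    "q1": [1, 1, 1, 1, 1, 0],
    "q2": [1, 1, 0, 1, 0, 0],
    "q3": [0, 0, 1, 0, 0, 0],
}

for item, outcomes in answers.items():
    p = mean(outcomes)  # share of correct answers for the item
    label = "easy" if p > 0.8 else ("hard" if p < 0.3 else "medium")
    print(f"{item}: {p:.2f} correct -> {label}")
```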

Likewise, if you measure your learners' performance immediately after the training, it will tell you nothing about their long-term retention. So remember to assess your learners at the point where you want the training to have produced its effect: 3 or 6 months later, for example.
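For instance, if you can give the same test right after the training and again a few months later, a rough retention check could look like this sketch (invented scores):

```python
# Scores on the same test, immediately after training and 3 months later -- illustrative data
immediate_scores = {"a01": 82, "a02": 75, "a03": 90}
delayed_scores = {"a01": 70, "a02": 72, "a03": 61}

for learner, immediate in immediate_scores.items():
    delayed = delayed_scores[learner]
    retention = delayed / immediate * 100
    print(f"{learner}: retained {retention:.0f}% of immediate performance")
```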

In short:

  • behavioral data is also easy to obtain, and may even already be available to you;
  • it is less biased than subjective data;
  • its quality depends on how you obtained it.
Measuring educational effectiveness - Didask Learning App

To finish...

Subjective data, behavioral data... we hope you now see a little more clearly how to measure the educational effectiveness of your training. If there is only one thing to remember from this article, it is this: before you go hunting for data, clearly identify what information you want (progress? difficulty? transfer? satisfaction?) and then select the measures suited to your needs. More than the nature of the measures chosen, it is their fit with your objective that will make the difference!

About the author
Svetlana Meyer

Svetlana Meyer is Didask's scientific manager. A doctor in cognitive science, she integrates the latest research findings on learning into our product to improve the effectiveness of the content created on Didask.
