Proven Post-Training Evaluation Tips

One of the often-overlooked components of training program development is the post-training evaluation. In many cases, the evaluation design is an afterthought, as precedence is given to the actual training content.

This is a huge mistake.

Arguably, the design of your evaluations should begin near the start of the entire project. This includes pre-course, in-course, and post-course evaluations. Otherwise, you will likely end up with paper-based, smiley-face-driven evaluations that won't give you any real insight into the course material or how to improve it.

When it comes to post-training assessment, there are a few "best practice" tips you can use to help maximize its effectiveness and participation. It doesn't matter whether the training is elearning or live; these guidelines apply to both:

Post-Training Evaluation Tips

Make evaluation part of course completion metrics – By linking the evaluation to a learner’s proof of attendance, your participation rates will skyrocket.

Administer assessments electronically – There are a variety of reasons for this, but mainly because it allows you to generate meaningful reports from the responses you receive, as well as keep accurate records. Even in a live training session, look to deliver the evaluation electronically. (A rough sketch of this kind of reporting follows these tips.)

Use proven evaluation techniques – Don't just come up with random questions; put some time into researching (and using) proven frameworks for capturing relevant data. There are many evaluation models you can go with for your training (my personal favorite being the Kirkpatrick model).

Require comments – At least one section of your final evaluation should require written (typed) comments. Making every written-feedback portion of the evaluation required is a bit too much, but having one required section is just fine. If you use the first tip, you won't have to worry about much backlash to this.
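To give a rough idea of what "meaningful reports" can look like once responses are collected electronically, here is a minimal sketch that averages Likert-style ratings per question from an exported response file. The file name, column layout, and the assumption of a CSV export are illustrative only and not tied to any particular LMS or survey tool.

```python
# Minimal sketch: summarize electronically collected evaluation responses.
# Assumes a hypothetical CSV export ("evaluation_responses.csv") with one row
# per respondent and one column per Likert-scale question (rated 1-5).
# The file name and column layout are illustrative, not a specific LMS format.
import csv
from statistics import mean

def summarize_responses(path):
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    if not rows:
        return {}
    summary = {}
    for question in rows[0]:
        # Keep only numeric answers; skip blanks and free-text comment columns.
        ratings = []
        for row in rows:
            try:
                ratings.append(float(row[question]))
            except (TypeError, ValueError):
                continue
        if ratings:
            summary[question] = {
                "average": round(mean(ratings), 2),
                "responses": len(ratings),
            }
    return summary

if __name__ == "__main__":
    for question, stats in summarize_responses("evaluation_responses.csv").items():
        print(f"{question}: avg {stats['average']} ({stats['responses']} responses)")
```

Even a simple per-question average plus a response count goes further than a stack of paper smiley sheets: it shows you which parts of the course are rated poorly and whether learners are actually completing the evaluation.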


About the Author:

Justin Ferriman is the co-founder and CEO of LearnDash, the WordPress LMS trusted by the world's leading organizations, such as the University of Michigan, Digital Marketer, WPEngine, and Infusionsoft. Justin has made a career as an elearning consultant where he has implemented large-scale training programs for Fortune 500 companies. Twitter | LinkedIn

5 Comments
  1. I could not agree more! In one of the large multi-nationals I am currently working with, we even make completion of the evaluation form (which is not available until the course content has been completed) a prerequisite for the final course assessment. OK, we cannot ask for feedback on the final assessment itself, but it does raise the importance of the course assessment, and it also tends to be completed immediately after completing the content.

  2. I agree that evaluations are critical. Thank you for saying so! Too many people see these as a waste of time.

    Unfortunately, we have to go beyond the Kirkpatrick model.

    As pointed out in:

    Salas, E., Tannenbaum, S. I., Kraiger, K., & Smith-Jentsch, K. A. (2012). The science of training and development in organizations: What matters in practice. Psychological Science in the Public Interest, 13(2), 74-101.

    Which is from a top-tier scientific journal:

    “Historically, organizations and training researchers have relied on Kirkpatrick’s [4-Level] hierarchy as a framework for evaluating training programs…The Kirkpatrick framework has a number of theoretical and practical shortcomings. [It] is antithetical to nearly 40 years of research on human learning, leads to a checklist approach to evaluation (e.g., ‘we are measuring Levels 1 and 2, so we need to measure Level 3’), and, by ignoring the actual purpose for evaluation, risks providing no information of value to stakeholders…” (p. 91)

    Their words, not mine.

    But it does suggest that we can do better!

    How to do better? Well, that’s a long discussion. In short, we need evaluation models that are aligned with the research on learning, that help learners make good decisions on their smile sheets, and that provide data that is more meaningful than numeric averages.

  3. Dan Topf

    A suggestion: Please don’t call this “evaluation.” It’s actually a survey. It’s a survey that collects data from respondents. If you have that focus, then you can learn how to do it effectively. There are lots and lots of resources on collecting useful data this way.

    Evaluation is the result of the assessment. Assessment is comparing expected outcomes to actual. Simply asking people to respond to a survey is not ‘evaluation.’

    My suggestions on the survey? Align the survey with course results. Ask respondents to assess their skill level on key learning objectives in the course. Ask them to assess their change (improvement?). Compare that to your expected improvement. Collect self-reported examples of learning — ask participants to tell you the most valuable thing they learned with an example of its possible application. Collect these and compile them. THEN, follow up with a small number of course alumni to assess longer-term transfer of skills and knowledge to the workplace. Assess the enabling and hindering factors as best you can.

    No, a form at the end of a presentation is not an ‘evaluation.’ This stuff takes a bit of work, doesn’t it?
