Stupidly Simple Strategy to Demonstrate Training Effectiveness

One of the most important aspects of any training initiative is gathering user metrics so you can analyze the effectiveness of the training. Sadly, this is often the last area thought about prior to going live. Most of the attention goes to the training content, delivery mechanisms, schedule, and so forth, but I would argue that end-user evaluation should be one of the first items planned out.

In a typical training scenario, a survey is administered at the end of training to capture user feedback, what is often referred to as a Level 1 evaluation. This information does provide value (assuming it is done properly), but it only tells one part of the story. Even if you don’t have a ton of time to dedicate to robust user analytics, you can improve upon your Level 1 reporting data by simply administering a similar survey before the training is taken.

For example, let’s say you have a two-day workshop where you are going to train a particular skill. You could require a 30-minute pre-requisite elearning course in which users are introduced to the topic. Since many elearning programs allow you to create quizzes and surveys, this is an opportune time to gather baseline data on your users. Prior to the live training event, you can review this data to see which areas you should spend more time on, resulting in more effective training. At the end of your two-day workshop, you administer a similar survey to see how much your users progressed.
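
To make the pre/post comparison concrete, here is a minimal sketch in Python of how you might line up baseline scores against end-of-workshop scores. The file names and column layout ("topic" and "score") are hypothetical placeholders for whatever your survey tool actually exports, so adjust them to match your own data.

```python
import csv
from collections import defaultdict

def average_scores_by_topic(path):
    """Average self-rated scores per topic from an exported survey CSV.

    Assumes columns named 'topic' and 'score' (hypothetical); adapt to your export.
    """
    scores = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            scores[row["topic"]].append(float(row["score"]))
    return {topic: sum(vals) / len(vals) for topic, vals in scores.items()}

# Hypothetical exports: one from the pre-requisite course, one from the end-of-workshop survey
pre = average_scores_by_topic("pre_training_survey.csv")
post = average_scores_by_topic("post_training_survey.csv")

# Topics with the lowest baseline scores deserve extra time in the live workshop;
# the gain column is what you report back to leadership afterwards.
for topic in sorted(pre, key=pre.get):
    if topic in post:
        gain = post[topic] - pre[topic]
        print(f"{topic}: baseline {pre[topic]:.1f} -> post {post[topic]:.1f} (gain {gain:+.1f})")
```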

This simple technique will go a long way toward validating the metrics from your Level 1 evaluations. Your data becomes a little more meaningful when reporting on the results of the training. Incorporating Level 2 and/or Level 3 evaluations will take your reporting to the next level (but we’ll save that for another post).

Tools for Evaluation

Everyone has their favorite tools for end-user evaluation, but I thought I would share the ones I prefer. The first is a popular one, and for good reason. I have used SurveyMonkey on quite a few consulting engagements and find it not only intuitive but very flexible. The reporting function in SurveyMonkey is second to none. With the click of a button, you can export graphs and data for each question along with the number of responses. It really is handy for streamlining the reporting process. Although there is a monthly fee for SurveyMonkey, it isn’t over-the-top.

If you’re on a tight budget, though, then I suggest you take a look at LimeSurvey, a free, open-source surveying tool similar to SurveyMonkey. It has many of the same features as SurveyMonkey and offers decent reporting metrics as well. It will take some time to become accustomed to the interface, but it’s nothing too challenging.

Summary

There are many opinions and frameworks when it comes to gathering metrics and running evaluations. The important thing is not which one you use, but that you don’t ignore it. Although “boring,” data is what makes training tangible for leadership. If you can effectively show employee performance improvement (or perhaps ROI) with your data, then you will go a long way toward validating the importance of the training you create.

About the Author:

Justin Ferriman is the co-founder and CEO of LearnDash, the WordPress LMS trusted by the world's leading organizations, such as the University of Michigan, Digital Marketer, WPEngine, and Infusionsoft. Justin has made a career as an elearning consultant where he has implemented large-scale training programs for Fortune 500 companies. Twitter | LinkedIn

4 Comments
  1. Dear Justin,

    Can SurveyMonkey or LimeSurvey be incorporated into a LearnDash course in a way that facilitates Level 1, 2, & 3 evaluations? If so, what approach should be used? It would be great to use pre- & post-tests from all the learners to demonstrate how effective a given course might be.

    Sincerely, Steve

  2. Justin,
    Nice catchy title (good marketing strategy). Yes, end-of-course surveys are a necessary evil. However, research shows that much more is needed in addition to student satisfaction and perceptions. If I can finish my dissertation, I hope to offer some pre-course evaluation techniques to judge the inherent pedagogical quality of a course or training module based on Merrill’s E3 quality rubric.

    In the profession of ID/ISD, we have some proven strategies that even the most ardent constructivist cannot argue with. Early research by Frick and others has shown a high correlation between student satisfaction/perceptions of quality and Merrill’s First Principles of Instruction.

    Would I forego an end-of-course survey? Never. I am just saying that we need more to accurately measure the effectiveness of our course development.

  3. John

    Hi Justin,

    You hit the nail on the head. Could you please elaborate, possibly with an example, on the paragraph above:

    “For example, let’s say you have a two-day workshop where you are going to train a particular skill. You could require a 30-minute pre-requisite elearning course in which users are introduced to the topic. Since many elearning programs allow you to create quizzes and surveys, this is an opportune time to gather baseline data on your users. Prior to the live training event, you can review this data to see which areas you should spend more time on, resulting in more effective training. At the end of your two-day workshop, you administer a similar survey to see how much your users progressed.”

    I would like to hear more of your valuable insights on how you did it. Any material or links to go through would be great.

    Thanks
    John
