Conducting Post-Course Evaluations

Course evaluations are often an afterthought, a last-minute addition to an already overwhelming instructional design process. While many instructional designers recognize the importance of course evaluations, the work of corralling SMEs and iterating on multiple courses often takes precedence over developing evaluations.

The industry-standard Kirkpatrick model measures training across four levels of analysis:

  • Level 1: Did the learners enjoy training?
  • Level 2: What did the learners learn?
  • Level 3: How did the learners’ behavior change after attending training?
  • Level 4: What business results can be attributed to the training?

Beyond the standard four, two additional levels should also be evaluated:

  • Level 5: What is the return on investment (ROI) of the training program?
  • Level 6: Has the participant’s potential been expanded, and ultimately applied to the organization?

Ironically, leadership often seeks answers at Levels 5 and 6 but does not invest in evaluating the more basic levels. Instructional designers must ensure that all learning objectives can be successfully evaluated. Let’s take a closer look at some ways to approach evaluation for each level.

Level 1

Level 1 evaluations are “the low-hanging fruit.” Sometimes called “smile sheets,” these evaluations seek answers to simple questions immediately after the training: participants may be asked about the training environment, the relevance of the information covered, and so on. The key is to have learners complete these evaluations while the training is still fresh. The results tell course designers what adjustments need to be made to improve training effectiveness. Here is a link to a sample Level 1 evaluation using Google Forms embedded within CourseArc.
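As a rough illustration of what happens after the smile sheets come back, the sketch below summarizes average ratings per question from a survey export. The file name and column names are assumptions for the example, not part of any particular tool’s output.

```python
import csv
from collections import defaultdict
from statistics import mean

# Hypothetical export: each row is one participant's smile-sheet response,
# with 1-5 ratings in columns such as "Environment", "Relevance", "Pacing".
RATING_COLUMNS = ["Environment", "Relevance", "Pacing"]

def summarize_level1(path):
    scores = defaultdict(list)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            for column in RATING_COLUMNS:
                value = row.get(column, "").strip()
                if value:  # skip blank answers
                    scores[column].append(int(value))
    # The average rating per question points to where adjustments are needed.
    return {column: round(mean(values), 2) for column, values in scores.items()}

if __name__ == "__main__":
    print(summarize_level1("level1_responses.csv"))
```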

Level 2

Level 2 evaluations usually take the form of a test. These evaluations ask learners to recall information presented in the course and provide a pass/fail result to confirm that learners are retaining the information. They are typically administered at the end of training. It is important that instructional designers carefully craft assessment items so that they cover all levels of learning. In addition to testing learners’ understanding and knowledge of the material, Level 2 evaluations may reveal instructional design errors: if learners consistently miss the same key points, the content may not be presented in the best possible format, and the course may need to be redesigned.
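One way to surface that kind of design problem is to look at per-item miss rates. Here is a minimal sketch, using made-up graded results where each record lists a learner, an item ID, and whether the item was answered correctly:

```python
from collections import Counter

# Hypothetical graded results: (learner_id, item_id, answered_correctly)
results = [
    ("a01", "Q1", True), ("a01", "Q2", False),
    ("a02", "Q1", True), ("a02", "Q2", False),
    ("a03", "Q1", False), ("a03", "Q2", False),
]

attempts, misses = Counter(), Counter()
for _, item_id, correct in results:
    attempts[item_id] += 1
    if not correct:
        misses[item_id] += 1

# Items missed by most learners may signal content that needs redesign,
# not just individual knowledge gaps.
for item_id in attempts:
    rate = misses[item_id] / attempts[item_id]
    flag = "  <-- review content" if rate >= 0.5 else ""
    print(f"{item_id}: {rate:.0%} missed{flag}")
```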

Level 3

behavior change illustrationLevel 3 evaluations are more difficult to create. To measure behavior changes, the learner’s baseline behavior must be compared to the behavior after the training. Level 3 makes learning more than an event. It makes learning a process. One method to evaluate behavior is to send a follow-up questionnaire to managers and supervisors few weeks or months after the training. In this questionnaire, the ISD specialist wants to see if managers notice a difference in behavior as a result of training. For example, managers may notice that an employee now approaches problems differently, or can solve issues faster. One way to capture this information would be to send a follow-up survey via Google Forms or something similar and have summaries shared with management.
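To make the baseline comparison concrete, here is a small sketch with assumed data: manager ratings of an observable behavior on a 1–5 scale, collected before training and again in the follow-up survey.

```python
from statistics import mean

# Hypothetical manager ratings (1-5) of the same behavior for each employee,
# captured before training (baseline) and again a few months afterward.
baseline = {"emp01": 2, "emp02": 3, "emp03": 2, "emp04": 3}
followup = {"emp01": 4, "emp02": 3, "emp03": 4, "emp04": 5}

changes = [followup[e] - baseline[e] for e in baseline]
improved = sum(1 for delta in changes if delta > 0)

print(f"Average change in rating: {mean(changes):+.1f}")
print(f"Employees showing improvement: {improved} of {len(changes)}")
```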

Level 4

Level 4 evaluations require extensive planning before, during, and after training development. The first step is determining the metrics of success before course development begins. Keeping the training on track with these metrics is extremely important – if the training changes scope, the metrics may change as well. Depending on the organization, there may be different outcomes the training is trying to achieve. By working with stakeholders, instructional designers must determine the root cause the training is meant to address before developing it. Additionally, together with the SMEs and stakeholders, course designers should define the desired outcome. For example, if compliance training needs to be developed, instructional designers may first determine how many incidents of non-compliance with policy occur in the current state, and then measure the number of incidents that occur after the training. Expressing the return on investment as a measurable number is the ideal way to collect data.
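Continuing the compliance example, the short sketch below shows how assumed pre- and post-training incident counts could be turned into a measurable Level 4 result and, for a rough Level 5 view, a simple return-on-investment figure. All of the numbers are placeholders for illustration only.

```python
# Assumed figures for illustration only.
incidents_before = 40          # non-compliance incidents in the year before training
incidents_after = 22           # incidents in the year after training
cost_per_incident = 1_500.00   # estimated cost of handling one incident
training_cost = 10_000.00      # total cost of developing and delivering the course

incidents_avoided = incidents_before - incidents_after
benefit = incidents_avoided * cost_per_incident

# Level 4: the measurable business result attributed to the training.
print(f"Incidents reduced by {incidents_avoided} "
      f"({incidents_avoided / incidents_before:.0%})")

# Level 5: a rough ROI, using the common (benefit - cost) / cost formula.
roi = (benefit - training_cost) / training_cost
print(f"Estimated ROI: {roi:.0%}")
```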

Through careful planning and preparation, instructional designers should be able to design effective post-course evaluations that improve business results.

Learn more about evaluation with our free online course: Principles of Instructional Design: A Roadmap for Creating Engaging eLearning Content – Ongoing Evaluation.


One thought on “Conducting Post-Course Evaluations”

  • It is often the case that Level 3 and especially Level 4 evaluation is out of scope for an LMS or standard learning software, and that’s a pity, since there is a lot to gain here. For one, by gathering long-term data and verifying what kind of learning works, you can establish a clear position for learning in your company.

    Second, for eLearners and people working in the industry, Level 3 evaluation is very important – it is extremely valuable to gather data on behavioral change, as it lets us innovate our methods.
