Number/percentage of KM trainings achieving training objectives

Indicator Number: 
11

Category: 

Logic Model Component: 

Data Type(s): 
Count, proportion, qualitative
Short Definition: 
Measures the extent to which KM trainings among staff, and in some instances COP members or partners, achieve training objectives
Definition and Explanation (Long): 
This internal indicator measures whether KM trainings for staff (and in some instances COP members or partners) achieve their training objectives. Those who design or conduct the training set the objectives in terms of improved skills, competence, and/or performance of the trainees.
Data Requirements: 
Responses to training evaluations, specifically answers to questions about whether the training met its objectives; observer comments; and trainee test scores, if available.
Data Sources: 
Training records, training evaluation forms, notes of independent course observer, trainee test results
Frequency of Data Collection: 
Semiannually
Purpose: 
This indicator records whether the training has provided the KM skills and knowledge outlined in the course objectives. Ideally, these objectives would be designed to address gaps identified by the KM knowledge audit. In other words, this indicator can provide one way of gauging the degree to which an organization has acted on its knowledge audit (see indicator 1). For example, the KM audit may have found that many staff members do not use the organization’s information and knowledge resources. Training staff about internal KM tools, technologies, and strategies may help solve this problem. In this case, this indicator would measure whether the training led the staff members to increase their use of information/knowledge resources.
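As an illustration only (not part of the indicator definition), the count and percentage forms of this indicator could be tabulated from training evaluation records along these lines; the record structure and field names here are hypothetical:

```python
# Hypothetical evaluation records: one per KM training conducted in the
# reporting period, with a flag for whether it met its stated objectives
# (e.g., as judged from evaluation forms or observer notes).
trainings = [
    {"name": "KM tools orientation", "met_objectives": True},
    {"name": "Knowledge audit follow-up", "met_objectives": False},
    {"name": "CoP facilitation skills", "met_objectives": True},
]

# Numerator: number of trainings achieving their objectives.
achieved = sum(1 for t in trainings if t["met_objectives"])

# Percentage: numerator divided by all trainings in the period.
percentage = 100 * achieved / len(trainings)

print(f"{achieved} of {len(trainings)} trainings "
      f"({percentage:.0f}%) met objectives")
```

In practice the "met objectives" judgment would come from the data sources listed above (evaluation forms, observer notes, test results) rather than a single yes/no field.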
Issues and Challenges: 
Courtesy bias often affects training participants' responses to evaluation questions. Assuring participants that their responses will be kept confidential or anonymous, for example by leaving names off evaluation forms, may encourage franker answers. In addition, because evaluation forms are not always the best way to evaluate a training (owing to factors such as courtesy bias, low response rates, and the difficulty of self-reporting on the effects of training just received), other methods may be used to gauge learning and improvements in performance. For example, after training people to use an information technology, trainers could observe trainees conducting a search on their own, or track usage patterns through an online knowledge resource. Where possible, this observation could be conducted several weeks after the training, as a measure of whether new knowledge was retained.
Pages in the Guide: 
29-30

Published Year: 

2013
Last Updated Date: 
Wednesday, September 6, 2017