Count, proportion, qualitative
Issues and Challenges:
This indicator records whether the training has provided the KM skills and knowledge outlined in the course objectives. Ideally, these objectives would be designed to address gaps identified by the KM knowledge audit. In other words, this indicator can provide one way of gauging the degree to which an organization has acted on its knowledge audit (see indicator 1). For example, the KM audit may have found that many staff members do not use the organization’s information and knowledge resources. Training staff about internal KM tools, technologies, and strategies may help solve this problem. In this case, this indicator would measure whether the training led the staff members to increase their use of information/knowledge resources.
Courtesy bias often affects training participants' responses to evaluation questions. Assuring participants that their responses will be kept confidential, or made anonymous by leaving names off evaluation forms, may encourage them to respond more frankly. In addition, since evaluation forms are not always the best way to evaluate a training (due to factors including courtesy bias, low response rates, and the difficulty of self-reporting on the effects of training that was only just received), other methods may be used to gauge learning and improvements in performance.
For example, after training people to use an information technology, trainers could observe the trainees conducting a search on their own, or could draw on the usage logs of an online knowledge resource to track trainees' patterns of use. This observation could be conducted several weeks after training, if possible, as a measure of whether new knowledge was retained.
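Where usage logs are available, computing the indicator reduces to simple arithmetic: the proportion of trainees who accessed the resource after the training date. The following is a minimal sketch only; the log records, staff IDs, dates, and the `post_training_usage_rate` helper are all hypothetical, not drawn from any particular system.

```python
from datetime import date

# Hypothetical access-log records: (staff_id, access_date).
# In practice these would be exported from the knowledge resource's usage logs.
access_log = [
    ("staff01", date(2024, 4, 10)),
    ("staff02", date(2024, 4, 18)),
    ("staff01", date(2024, 5, 2)),
    ("staff04", date(2024, 3, 1)),   # non-trainee access; excluded below
]

trainees = {"staff01", "staff02", "staff03"}
training_date = date(2024, 4, 1)

def post_training_usage_rate(trainees, access_log, training_date):
    """Proportion of trainees who used the resource after the training."""
    users = {staff_id for staff_id, accessed in access_log
             if staff_id in trainees and accessed > training_date}
    return len(users) / len(trainees)

rate = post_training_usage_rate(trainees, access_log, training_date)
print(f"{rate:.0%} of trainees used the resource after training")
```

Here two of three trainees accessed the resource after the training date, giving a rate of 67%. The same count could be compared against a pre-training baseline to show whether usage actually increased.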