Hi @mmoore1. This is a good question, and as you suggested, reaching the last two levels of Kirkpatrick's model can be difficult. In the past we have tried to measure behavior change by surveying those who attended training, though we do not normally go into their Canvas courses and look around, even though we could. This gives us some measure of behavior change, although it is self-reported data. In terms of results, we will at times do course reviews for faculty members using a QM-like checklist on a class that is in progress or recently finished. These normally include reviewing student surveys embedded in the class that gauge their view of course elements from the end-user standpoint. We are also sometimes asked to review courses with a high failure or withdrawal rate to see if we can identify potential improvements. These measures may not be perfect, but they give us some useful information.
As for which sessions reach these higher levels, organically it is usually the ones covering technologies or techniques that faculty are expected to use. At my institution, faculty must use Canvas to post a syllabus, keep an updated gradebook, and, for some classes, take attendance. Training tied to these tasks, or to working more efficiently in these areas, normally produces direct behavioral change and makes results easier to measure. We also ask departments to tell us what training they would like and when to offer it to their faculty. When the department and the department chair are involved, the training is usually better targeted to immediate needs and the behavior change is greater.
I am very interested in hearing the insights of others in the Community, so I hope more folks chip in!