Hello Fabulous Community Members,
Happy New Year! I have an interesting question for all of you....
At Touro College, we use an internal rubric to score the quality of our online courses. This process is highly manual (factory-like) and presents a variety of challenges, including:
Our QA reviews are completed each semester. Based on your experience doing QA reviews for courses, what methods do you use to 1. save time, 2. incentivize faculty to complete necessary modifications, and 3. norm among scorers/evaluators? In other words, what does your process look like?
Any information, resources, or guidance you can share is greatly appreciated.
Assistant Director of Instructional Design, Online Education
Touro College and University System
New York, New York
Not sure if any of this is relevant, but here's my experience (you may already be doing all of this, or it may not suit your context):
At a previous institution, teaching staff were responsible for developing online courses and had to meet minimum QA requirements. For several weeks before the semester started, we ran drop-in sessions where staff could come and get support on the key QA aspects their courses needed to meet before being made live/published (i.e., a course had to meet the minimum QA requirements before it became visible to students). Staff could get 1-1 support in a computer lab, developing and checking QA at the same time, which meant we could clarify what wasn't quite right and how to fix it, and they could amend it on the spot. It was still resource heavy, but it made clarifications easier, the direct support was more motivating and inclusive for teaching staff, and there was less paperwork, since most issues could be resolved verbally on the spot, with only minimal documentation when there were several items and staff were going to go away and complete them. Tying QA to course visibility got a lot of pushback and meant some courses did not get published at all; however, in a post-COVID era I would expect this to be less of an issue, since courses can no longer get by without an online presence the way they used to.
In terms of norming/moderation/consistency, this is a harder one, and we still struggled with it. I think there are a number of challenges rolled into this one: