New Member

QA Reviews for Online Courses : Oh The Manual Labor...

Hello Fabulous Community Members, 

Happy New Year! I have an interesting question for all of you....

At Touro College, we use an internal rubric to score the quality of our online courses. This process is very manual (factory-like) and presents a variety of challenges, including: 

  • Discrepancies between ID scores and Evaluator scores - IDs have to "re-score" 
  • Iterative back and forth between both parties 
  • Tracking faculty interactions and live updates (e.g., who was met with, what was discussed, etc.)

Our QA reviews are completed each semester. Based on your experience doing QA reviews for courses, what methods do you use to (1) save time, (2) incentivize faculty to complete necessary modifications, and (3) norm among scorers/evaluators? In other words, what does your process look like? 

Any information, resources, or guidance you can share is greatly appreciated. 


Holly Owens 

Assistant Director of Instructional Design, Online Education 

Touro College and University System 

New York, New York 


1 Reply
Community Participant

Not sure if any of this is relevant, but here's my experience (you may already be doing all of this, or it may not suit your context):
At a previous institution, teaching staff were responsible for developing online courses and had to meet minimum QA requirements before their courses could be made live/published (i.e., a course had to meet the minimum QA requirement to become visible to students).

For several weeks before the start of the semester, we ran drop-in sessions where staff could get one-on-one support in a computer lab to develop their courses and check QA at the same time. That meant we could clarify what wasn't quite right and how to fix it, and staff could amend issues on the spot. It was still resource-heavy, but it made clarifications easier, provided direct support that was more motivating and inclusive for teaching staff, and reduced paperwork, since most things could be handled verbally on the spot, with minimal documentation only when there were several items for staff to go away and complete.

Tying QA to course visibility got a lot of pushback and meant some courses did not get published at all. In a post-COVID era, though, I'd expect that to be less of an issue, since courses can't get by without an online presence the way they used to. 

In terms of norming/moderation/consistency, this is harder, and we still struggled with it. I think there are a number of challenges rolled into this one: 

  • Getting everyone on the same page - clear rubrics and examples can help clarify what things might look like. You seem to already have a strong rubric with different benchmarks; maybe linking a sample course, or screenshots taken from different courses, to specific rubric items might help? Or a training course or workshop, rather than the manual, might suit people's needs better - e.g. Quality Matters run a course on using the QM rubric, and OLC run a course unpacking their OSCQR rubric. Part of that training could include doing peer-review activities and discussing them afterwards. Also, from my experience with QM and OSCQR, many scorecard/rubric elements are open to interpretation and subjectivity, and this is well recognised. That can be a good thing (it leaves room to meet criteria in different ways), but it means examples and moderation training are likely to be needed. 
  • Efficiency - anything too big or cumbersome will be off-putting, so save people time through automation and templates as much as possible: e.g. a welcome announcement template that people can copy, or that is pre-loaded into the course and can be modified; a template welcome module with all the key policy info, netiquette, and links pre-populated; template course structures for different course types available in Commons to import; and so on. Most people will take the path of least resistance, so if you build templates that you feel meet the criteria and market them as time-savers, this might help. 
  • Getting buy-in - some people don't see the relevance of doing the work, and this is something I've definitely experienced. There might need to be some socialisation of the rubric: justifying why particular items are included and presenting the evidence for them in terms of improved student experience, retention, etc. Otherwise you may find an element of resistance, and attempts to subvert the process, regardless of what you do. This is where tying the process to something else - course availability, career progression/reporting, or some other mechanism - might help, as might providing people with time release and support to do the work, so it's not seen as an extra thing on top of everything else. 