Hi Ann
I imagine this is the holy grail for a lot of us, and honestly, even if you could put some code together, the reporting would lack the necessary interpretation.
So for example, with a bit of coding it wouldn't be too hard (for a coder) to run API calls that cycle through a set of courses and capture information about the modules in each (see Modules - Canvas LMS REST API Documentation). But that doesn't give you the 'real' picture (and note the technical limitations in the API doc - more calls are really required). What if the course also has Files and Pages visible, and this isn't consistent with your desired standard? Or what if the expected modules are there, but the content within them is lacking? This is where only humans can truly interpret and measure quality. Not only that - we can recognise deviations that aren't 'wrong' but are in fact innovations, and we can learn from them.
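To make that concrete, here's a minimal sketch of what that first pass could look like in Python. The Modules endpoint and the Link-header pagination are as described in the Canvas REST API documentation, but the institution URL, the API token, and the list of course IDs below are all placeholders, not anything from this thread.

```python
import requests

# Assumptions (placeholders): your Canvas instance URL, an API token with
# read access to the courses, and the course IDs you want to audit.
BASE_URL = "https://your-institution.instructure.com"
API_TOKEN = "YOUR_API_TOKEN"
COURSE_IDS = [101, 102, 103]

HEADERS = {"Authorization": f"Bearer {API_TOKEN}"}

def get_modules(course_id):
    """Fetch all modules for a course, following pagination Link headers."""
    modules = []
    url = f"{BASE_URL}/api/v1/courses/{course_id}/modules"
    params = {"per_page": 50}  # Canvas paginates; the default page size is small
    while url:
        resp = requests.get(url, headers=HEADERS, params=params)
        resp.raise_for_status()
        modules.extend(resp.json())
        # Subsequent pages are advertised via the Link response header,
        # which the requests library exposes as resp.links
        url = resp.links.get("next", {}).get("url")
        params = None  # the 'next' URL already carries its query string
    return modules

for course_id in COURSE_IDS:
    for module in get_modules(course_id):
        print(course_id, module["id"], module["name"], module.get("published"))
```

And note that this only captures the module shells - to see what's actually inside each module you'd need further calls per module (the module items endpoint), which is exactly the 'more calls are really required' caveat above. Even then, you're measuring structure, not quality.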
To really measure success, I honestly believe a course needs to be interpreted holistically by a knowledgeable human. The route to measuring success efficiently, then, is to develop more knowledgeable humans who can self-audit or peer-audit courses. This requires not just a change in knowledge, but also a change in culture.
And I'm afraid that is just as challenging to achieve as coding for non-coders is. But I do think it's the right challenge.