I am a (newish) Canvas Admin for my school, and my institution uses Outcomes that are attached to rubrics and graded/tracked in the Learning Mastery Gradebook. This academic term, our use and application of Outcomes at the course level has been inconsistent due to rapid growth in our instructor population combined with minimal-to-no opportunity to onboard/train; on top of that, a few of our returning instructors either forgot how to set this up in their rubrics or chose not to.
I was looking through Admin Analytics on the Courses page, using the Feature Use in Courses With Activity table, to see if I could determine how many courses are using Outcomes vs. those that aren't (both to target specific instructor populations for mandatory training and to help with our institutional reporting), and saw a number that fit my assumptions about our use of Outcomes. Essentially, it reported that not quite half of our courses show use of Outcomes, which seemed to confirm my suspicions. When I clicked on the bar graph to see which courses were using Outcomes, I got a helpful pop-up showing which courses were using Outcomes and how many each had used.
However, one of the instructors who I *know* is using Outcomes in their course was not present on this list; when I went to their course to double-check, it showed that the instructor *was* using Outcomes and had them aligned with assignments that had been graded. Now I'm wondering if I'm reading this table all wrong and concerned that I'm not understanding how to use this Feature Use in Courses With Activity table at all. If anyone could help me understand this inconsistency, whether it be my error or some issue with Admin Analytics (which I understand is still being tested out), I would be incredibly appreciative!
I can't answer that directly, @SteveWatkinsMel, but there are two account/sub-account level Outcomes reports that might give you a better handle on this, under Settings > Reports: Outcome Results and Outcome Export (there's a rough sketch at the end of this reply for pulling these via the API).
There are two lines of investigation that come to mind there.
A third is whether outcomes are reported at all if no students have submitted to an outcome-based assessment at that point in time.
Are you presetting the outcomes within template rubrics, or expecting colleagues to set them up from scratch?
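If it helps, here is a rough Python sketch of pulling the Outcome Results report through the Canvas Account Reports API rather than clicking through Settings > Reports. The instance URL, token, and the exact report identifier ("outcome_results_csv") are assumptions to confirm against your own account; the first call lists what your account actually offers.

```python
# Hedged sketch: fetch an account-level Outcome Results report via the
# Canvas Account Reports API. BASE_URL, ACCOUNT_ID, and the token are
# placeholders; confirm the report identifier with the listing call below.
import time
import requests

BASE_URL = "https://yourschool.instructure.com"  # hypothetical instance URL
ACCOUNT_ID = 1                                    # root account id (assumption)
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

# 1. See which reports this account offers (names can vary by instance).
available = requests.get(
    f"{BASE_URL}/api/v1/accounts/{ACCOUNT_ID}/reports", headers=HEADERS
).json()
print([r["report"] for r in available])

# 2. Kick off the Outcome Results report (identifier assumed).
report_type = "outcome_results_csv"
run = requests.post(
    f"{BASE_URL}/api/v1/accounts/{ACCOUNT_ID}/reports/{report_type}",
    headers=HEADERS,
).json()

# 3. Poll until Canvas finishes generating the CSV, then grab the file URL.
while run["status"] not in ("complete", "error", "aborted"):
    time.sleep(10)
    run = requests.get(
        f"{BASE_URL}/api/v1/accounts/{ACCOUNT_ID}/reports/{report_type}/{run['id']}",
        headers=HEADERS,
    ).json()

if run["status"] == "complete":
    print("Download the report from:", run["attachment"]["url"])
```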
Thanks for your response, Paul. I wound up running the Outcome Results report and was able to determine which courses were using Outcomes for assessment based on the information present in that report, so that's a really helpful idea. I was hoping to get a visual alongside the info; the Admin Analytics tool is really neat and I had hoped it could easily present the info I was looking for!
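For anyone else trying the same thing, here is roughly how I skimmed the downloaded CSV with pandas to see which courses show any outcome records at all. The column names ("course name", "learning outcome name") are from my export and may differ in yours, so check the header row first.

```python
import pandas as pd

# Read the Outcome Results CSV downloaded from Settings > Reports.
results = pd.read_csv("outcome_results.csv")

# Any row at all means the course has at least one outcome record;
# counting distinct outcomes per course gives a rough picture of coverage.
coverage = results.groupby("course name")["learning outcome name"].nunique()
print(coverage.sort_values(ascending=False))
```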
Outcomes are written as Course Level Outcomes (stated in our Course Catalog as determined by Dept. Leads), are preset at the admin/root level in Canvas, and then instructors are expected to attach them to rubrics of their own design. Not all rubrics track an Outcome (as not all graded material is necessarily course outcome driven -- thinking probes for understanding, some reading reflections, those sorts of things), but Outcomes are only ever to be present and assessed within a rubric, if that process makes sense.
We've been sort of building the airplane while flying it with this process, but up until this term we had a pretty smooth process for tracking these Outcomes at the course level.

I checked two instructors against each other: one who was present on the list of those using Outcomes according to the data in Admin Analytics, and the one I mentioned in my OP who was using Outcomes in their assignments but did not show up in the Admin Analytics report. It appears that the instructor who *did* appear in the report did not attach Outcomes to their rubrics, but assessed outside of the rubric structure. Further, other instructors present on the list in Admin Analytics showed as having used Outcomes but did not assess with them, either separately or in rubrics. They seem to have been marked as using Outcomes in Admin Analytics just by adding them to their course, and I think the "use" may simply be associated with a user viewing the Outcomes page in their course.

So, the data might not actually track the application of the Outcome to a rubric, but rather the instructor (the Outcomes page is hidden from students) viewing the Outcomes page itself. Which is not super helpful for my purposes, because an instructor who has used Outcomes previously can import those same Outcomes into their new course along with a rubric they intend to reuse, or even from within the rubric creation process, without ever visiting the Outcomes page of their course.
So...mystery solved? Maybe? Could be I'm just asking Admin Analytics to do something it wasn't designed/expected to do.
I think Admin Analytics is still evolving, Steve, so it's always useful to cross-check against other reports. It would be beneficial to see a highly granular account of what the various analytics tools are counting. I'd agree that there is a difference, or progression of value, in terms of merely adding or viewing an Outcome in a course, attaching it to a rubric, and actually scoring against it.
Very useful to hear how you have been using these (thank you). We still don't have a consensus on Canvas Outcomes in my institution, and so far there has been relatively little use of Canvas Mastery/Outcomes. The main advantage that I see is the ability to report outcomes achieved across different units of delivery (Canvas courses) back to the centre; apart from badges, this would seem to be the only opportunity to associate Canvas courses with Program-level achievement, be that Program or teaching Unit outcomes, graduate skills, employability, or any other measure.
I'm not sure about Admin Analytics (I haven't used it much), but in terms of the built-in reports (the ones that can be generated from the "Reports" tab at each account/sub-account level) and Canvas Data: putting an outcome into a rubric is enough for that criterion/item to generate a record in the outcomes reports. It DOES NOT have to actually be used/graded (scored/"boxes clicked" in the rubric). So when I run the raw data reports each semester for our assessment efforts, I have my data queries programmed to filter out all of these "ghost outcomes" - rows for outcomes that were added to rubrics but never actually scored in SpeedGrader when the instructor graded, so they are essentially blank.
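A minimal sketch of that filtering step, assuming the blank score shows up in an "outcome score" column (check your own export's headers, since report columns can vary):

```python
import pandas as pd

results = pd.read_csv("outcome_results.csv")

# "Ghost outcomes": the outcome was attached to a rubric criterion, so a row
# exists, but nobody ever scored it in SpeedGrader - the score field is blank.
scored = results[results["outcome score"].notna()]

# Courses that actually assessed against Outcomes, not just attached them.
print(sorted(scored["course name"].dropna().unique()))
```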
Thank you for the additional clarity about what the reports actually record with regard to Outcomes! That's super helpful for me to know as well. Sounds like I'll need to do some additional filtering myself to get the most accurate record of how we're assessing as an institution. Thanks again!