Looking to discuss this feature from the 2020-12-19 Release Notes? Post a reply and start a conversation!
- This topic is for feature discussion only—please share use cases, best practices, etc. regarding this feature
- Please thread replies as much as possible to keep posts organized
WHERE SHOULD I POST...?
- Idea enhancement feedback to product managers should be submitted in ideas.canvaslms.com (though linking to the idea here so others can find it is welcome)
- Bug reports should be submitted to Canvas Support—bugs will not be triaged in this thread
Hi @erinhmcmillan. As soon as this document was released, we asked our CSM to enable the feature on our Beta instance so I could determine the implications for our existing users of account-level outcomes. It looks as though this change will limit each account to a single ratings scale with a uniform set of descriptors for proficiency levels. When I consult with schools and departments about implementing account-level outcomes, I strongly encourage this approach, but I know we have accounts where this is not the case. There are a few different scenarios that I am aware of:
I've supported multiple outcomes assessment tools over the years, including dedicated stand-alone systems, and in my experience, academic units have very strong ideas about how they want to assess student learning. The more rigid the assessment management system, the less likely it will work for the diverse approaches employed across the university. Thus, while I personally appreciate the value of the change described in this update, I am afraid it will make some of our users unhappy.
We'll keep this change turned off for now. But it's important for us to know whether it will be enforced at some point in the future. If so, I'll need to alert our current outcomes users, and make it clear to new adopters that they must use a consistent rating scale across all outcomes. Do you foresee the eventual replacement of the legacy implementation with a single, uniform scale per account?
The only way that I have found up to this point for tracking progress on outcomes is to build the outcome into a rubric as one of the graded criteria. Requiring every teacher to grade every rubric criterion for every assignment out of the same number of points is not reasonable.
Perhaps this update would be practical if the points in the rubric were assigned to the assignment score separately, while the outcome tracked mastery with the standardized scale. That would make the system much more usable. Any rubric criterion could be linked to an outcome while retaining its existing description and achievement levels, with those achievement levels manually matched to the outcome's mastery levels.
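The decoupling proposed above can be sketched in a few lines of Python. This is a hypothetical illustration, not Canvas code: the level names, point values, and mastery levels are made up for the example, and the idea is simply that a criterion's points feed the assignment grade while a separate, manually defined mapping records the result against the standardized outcome scale.

```python
# Hypothetical sketch: rubric points and outcome mastery tracked separately.
# A criterion keeps its own descriptions and point values for grading...
criterion_levels = {"Exemplary": 10, "Proficient": 8, "Developing": 5, "Beginning": 2}

# ...while each achievement level is manually matched to a level on the
# standardized outcome mastery scale (here, an assumed 0-4 scale).
level_to_mastery = {"Exemplary": 4, "Proficient": 3, "Developing": 2, "Beginning": 1}

def grade(level):
    """Return (assignment points, outcome mastery level) for one rating."""
    points = criterion_levels[level]    # contributes to the assignment score
    mastery = level_to_mastery[level]   # recorded against the outcome scale
    return points, mastery

print(grade("Proficient"))  # (8, 3): 8 rubric points, mastery level 3
```

Under this scheme the rubric's existing descriptors survive unchanged; only the mapping table has to be maintained per criterion.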
@joe_fahs I've confirmed that the feature has been allowed in the test environment.
Yes, the feature must be turned on to apply to subaccounts, as the feature is built at the course level.
As to your other comments, our amazing product manager @jsailor is mindful of what changes need to be made moving forward and appreciates your feedback!
We recently had this feature turned on in our Beta instance. I greatly appreciate the separation of the outcome statement, the mastery levels, and the calculation at the Account level. However, I agree with both @leward and @joe_fahs regarding the ability to control all aspects of each outcome at lower levels rather than have them lumped into a single mastery level/calculation for all outcomes, especially for those built at the course level.
I do believe that a "default" setting would be useful -- and this change could be used in that manner. However, we still need the granularity to set mastery and calculation per outcome, whether that is at the course or sub-account level.
From the administrative point of view, at the Account level, I am ready to use this enhancement. However, I won't/can't turn it on because of the restriction it places on the lower levels. Until this is further enhanced, we likely won't be able to use outcomes in the way we need to.
Perhaps a "permission" to allow per outcome control of all aspects of an outcome at course, sub-account levels by admin, teacher, etc would be a useful approach to resolving this.
Turning this on seems to default the mastery scale to a 0-4 scale. We have imported rubrics (ex. Common Core standards) that are set to a 5, 3, 0 scale. Is that replaced with 0-4 as soon as this is turned on? What are the implications of doing this mid-year if those standards have already been used? Do I need to reset that 5, 3, 0 as the baseline and then make specific course or subaccount mods from there?
@audra_agnelly all the mastery scores would be affected, so you will probably not want to turn it on in the middle of the year. Talk to your CSM if you have additional questions about how it works!
Since we are early in our institutional assessment and rubrics process, I enabled this for our institution. Unfortunately, I disabled it soon after. Prior to enabling, we had created outcomes and built rubrics based on these outcomes. We entered descriptive text for each rating so that faculty could refer to this text as they graded students on the criteria. See image.
After enabling, the descriptive text for each rating was removed. Faculty were able to import the criterion into their assignment rubric or use the entire rubric; however, the verbiage describing the ratings did not appear. Once I disabled, all was right again in the rubric world.
Just wanted to mention. Perhaps I missed something along the way. If so, set me straight. Otherwise, this was a deal-breaker for us. Faculty need the descriptions in order to grade consistently across the institution.
Hi @erinhmcmillan. I am preparing a communication about this change for all of our account admins so that they know what's coming down the road. I just did a little more testing and noticed that admins and instructors who have permission to edit the mastery scale can change the scale for outcomes imported from a parent or ancestor account. Moreover, when outcomes are imported into a sub-account or course, the mastery scale for the imported outcomes is the scale for the current context, which may be different from the original scale.
As an example, suppose a campus publishes general education outcomes at the root level with a 5-point scale, with mastery at 3. Then an engineering department that needs to collect data on discipline-specific outcomes sets a 3-point scale with mastery at 2. Courses provisioned into the engineering subaccount will start out with the default scale of their immediate parent, or so it seems. But more importantly, if the gen ed outcomes are imported into an engineering course, those outcomes use the course scale, which is based on the engineering department scale, not the gen ed scale. Now imagine this happening across multiple colleges and departments at the university, each of which has set its own rating scale. When the gen ed data is aggregated across schools and departments, the rating scale for a specific gen ed outcome will vary by course, making it very difficult to summarize the raw data. In some classes, a 3 might mean exceeds mastery, whereas in others, it might mean meets or below mastery. As far as I can tell, the only way to ensure a consistent scale for a given outcome is to set an immutable institutional scale. This isn't a good solution, however, because schools and departments are often accountable to external accrediting bodies with specific expectations about assessment.
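The aggregation problem described above can be made concrete with a short sketch. The scale definitions and scores below are illustrative assumptions, not Canvas data; the point is that a raw score only has meaning relative to the scale of the context it was recorded in, so identical raw scores cannot be pooled across contexts.

```python
# Hypothetical mastery scales for two contexts assessing the same gen ed outcome.
scales = {
    "gen_ed_root": {"max_points": 5, "mastery": 3},  # root-level 5-point scale
    "engineering": {"max_points": 3, "mastery": 2},  # department 3-point scale
}

def meets_mastery(score, context):
    """A raw score can only be interpreted against its own context's scale."""
    return score >= scales[context]["mastery"]

# The same raw score of 2 means different things depending on where it was recorded:
print(meets_mastery(2, "gen_ed_root"))  # False: below mastery on the 5-point scale
print(meets_mastery(2, "engineering"))  # True: meets mastery on the 3-point scale
```

Any institution-wide summary of the raw scores would therefore have to carry the per-context scale alongside each data point, which is exactly the bookkeeping burden the post describes.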
I am having a hard time understanding why this change was made. Can you explain the rationale behind allowing the scale of outcomes imported from an account to be edited in a child account or course?
@leward apologies that you haven't received a response to your question. I'd recommend posting your questions in Jody's TLDR blog post about outcomes; she asks a few questions where your response could be used: Outcomes: The Cornerstones of Teaching and Learnin... - Canvas Community
@carpenter our teams were recently made aware of this situation. We have updated this document to reflect that outcomes should not be turned off in the production environment, as our teams cannot account for all the use cases that take place across Canvas when outcomes are applied. Our team can fix this for you if you submit a support case and ask it to be addressed.