I'm having trouble understanding the Outcomes feature on Canvas. For my course, I have 5 learning outcomes and each outcome has several quiz questions. I created a separate quiz bank for each learning outcome and linked the outcome to each respective bank.
What I'd like to do now is take those questions and place them into different quizzes. For example, I'd like to take a few questions from Outcomes 1, 3, and 4 and place them in Quiz 1. Eventually all of the outcome questions will be used, but they will be split across separate quizzes.
My question:
When I look at the outcome results in the Learning Mastery gradebook, will mastery be calculated on the number of questions pulled from the question bank, OR on the number of quizzes I made that include those outcomes? As an example, my Outcome 1 has 4 questions, and I placed each of those questions into a different quiz. Given that setup, if I use the decaying average calculation, how will the outcome be calculated? Because the questions come from one question bank tied to Outcome 1, will they be considered "a single score," or will each question count as a separate score? Ideally, I'd like each question to be counted equally for that outcome.
The normal behavior in Canvas, when some items have no score yet, is to calculate averages using only the items that do have a score. I would expect that you could even use large question banks for each objective and set a number of random questions to be selected for each student's quiz; each student's decaying average should then be based on whatever questions that student was actually asked. New Quizzes, and linking objectives this way, are new, so this process may need some testing to get the view you are looking for.
I've spent so many hours of my life thinking about question banks and outcomes! I'm still using the older quiz type; New Quizzes is a whole other mess.
From my experiments, the mastery gradebook treats the items pulled from each question bank as a separate entry in the mastery gradebook, and calculates mastery based on just those items. For example, in this test I had three question banks, all aligned to the same outcome, and pulled one question from each bank. The mastery gradebook treated each bank separately, scoring either a 4 or a 0. Under the decaying average calculation, this student got a 3.3, but a student who got the third question incorrect would score a 1.2. Not at all what I want. I'm now experimenting with adding rubrics to each quiz so that when I grade them, I can also select a mastery level. It's not all nice and automatic, but it's the only way I've found to have an entire quiz considered as one outcome.
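For anyone trying to reproduce numbers like the 3.3 above, Canvas documents its default decaying average as weighting the most recent score 65% and the average of all prior scores 35%. A minimal sketch of that formula (my own illustration, not Canvas code; 3.3 matches three bank entries scored 0, 4, 4 in that order, and note the result depends on the order in which the entries are recorded):

```python
def decaying_average(scores, recent_weight=0.65):
    """Decaying average as documented for Canvas outcomes:
    the most recent score gets `recent_weight`, and the average
    of all prior scores gets the remainder (65/35 by default)."""
    if not scores:
        raise ValueError("need at least one score")
    if len(scores) == 1:
        return scores[0]
    *prior, latest = scores
    return latest * recent_weight + (sum(prior) / len(prior)) * (1 - recent_weight)

# Three bank entries scored 0, 4, 4 (in that order):
# 4 * 0.65 + ((0 + 4) / 2) * 0.35 = 3.3
print(round(decaying_average([0, 4, 4]), 2))  # 3.3
```

Because the most recent entry dominates, reordering the same three scores changes the result, which is likely why two students with the same number of correct answers can end up with very different mastery levels.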
@cynthia_londeor I don't have much experience with linking outcomes to classic quizzes. I was not aware that they could already be linked to individual questions before New Quizzes. I have used outcomes as criteria in assignment rubrics, however. The outcome is separate from the student's grade. The score on the assignment is based on student success on the overall rubric, which typically includes multiple criteria, none, some, or all of which may be linked to an outcome. The outcome then uses only the one criterion from that rubric for a score, and if the same criterion is repeated in later rubrics, there are several options for how the multiple scores are combined. Decaying average is only one of those options. The others are described here: https://community.canvaslms.com/t5/Instructor-Guide/How-do-I-manage-outcome-mastery-calculations-in-...
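To make the contrast concrete, three of the other calculation methods from that guide are much simpler than decaying average. A hedged sketch (method names per the Canvas guide; the implementations are my own reading of the documentation):

```python
def most_recent_score(scores):
    # "Most Recent Score": mastery equals whatever the student scored last.
    return scores[-1]

def highest_score(scores):
    # "Highest Score": mastery equals the best score the student has earned.
    return max(scores)

def average(scores):
    # "Average": straight mean of all aligned scores.
    return sum(scores) / len(scores)

scores = [0, 4, 4]
print(most_recent_score(scores))   # 4
print(highest_score(scores))       # 4
print(round(average(scores), 2))   # 2.67
```

Highest Score is the most forgiving of early attempts, while Most Recent Score makes mastery entirely order-dependent, so which method you pick matters as much as how you align the questions.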
In general, I think of outcomes as the larger objectives from the institution or specific to the course. Those objectives are aligned to what we measure in individual assignments, but unless you are teaching a competency-based course, the objectives do not replace a grade.
The student who had a mastery of 1.2 for missing 2 out of 3 questions would only have a 1.2 in the Learning Mastery gradebook, while the regular gradebook is assignment-based, so it would record a score of 33% for getting one out of 3 questions correct. If a student skips a question, it counts as a wrong answer in the quiz, and likely also in any aligned mastery calculation, but quizzes a student has not yet taken should not influence the mastery levels at all.
That 1.2 would be from that specific quiz, and not from the entire course. The same objective can be linked to other quizzes and assignments so that later success at that skill can pull up the student's mastery level for the objective without impacting the student's prior grades on previous assignments which are individually recorded in the regular gradebook.