Recently I was asked to develop an LTI to allow faculty to submit their grades directly from Canvas back to the SIS. The obvious benefit is that faculty would no longer have to do double entry, i.e. maintain the Canvas gradebook and then manually re-enter their grades into the SIS gradebook. This was well received and was well worth the effort. Working through the grade-submission process gave us the knowledge and ideas behind this article.
As we rolled this out at our institution, we received several tickets stating that the Canvas gradebook wasn't reporting the correct grades. What we learned from these tickets was that faculty who were new to an LMS in general were not aware of some details that could cause grades to be inaccurate. Faculty who had been using pencil and paper, or even an Excel spreadsheet, were simply used to doing things themselves, and did not really understand things like grading schemes, assignment groups, or the effect of ungraded assignments.
Someone might ask why the faculty didn't receive training, which is a valid question. In our case training was provided. I am sure we can all agree that everyone learns at their own pace, even faculty. Learning new technology can sometimes be daunting, in a company or on a college campus, and in just about any scenario training is not fully absorbed right away. In our environment, faculty need to focus on what they are teaching, and the tools they use should not interfere with their instruction. It is our job to try and make things easier, not harder.
So we came up with the idea of a "gradebook checker". What if we could inspect the gradebook for items that obviously stand out, and alert the instructor to those items? If we could do this, the instructor could receive a preemptive report showing them which items could potentially cause an incorrect grade to be displayed or reported. The instructor then has an opportunity to correct those items, or to ask questions to clarify why those items might cause inaccuracy.
While developing the SIS grade submission algorithms, we learned quite a bit about the Canvas API. That development process gave us a high level of comfort not only with how to use the API, but also with the API documentation, which can be found here:
The API will allow you to build tools to achieve many different goals. I will discuss the API calls that we used to create our "gradebook analysis", and will cover some of the specific details that we look for to alert faculty to potential issues.
Multiple API calls are required to compile the data necessary to create the analysis report. Because of this, we are using a workflow. When the user requests an analysis of their gradebook, we return immediately with a message letting them know their request has been received and that they will receive the results by email. Our asynchronous workflow can then run, throttled if necessary, and complete the report at its own pace.
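The request/worker pattern above can be sketched in a few lines. This is a minimal in-process illustration using a queue and a worker thread; the names (`request_analysis`, `build_report`) are my own, and a production version would use a persistent job queue and send the finished report by email rather than storing it in a dictionary.

```python
import queue
import threading

report_queue = queue.Queue()
completed_reports = {}

def build_report(course_id):
    # Placeholder for the real analysis: multiple Canvas API calls,
    # throttled as needed, compiled into a single report.
    return f"Analysis report for course {course_id}"

def worker():
    while True:
        course_id = report_queue.get()
        if course_id is None:          # sentinel: shut the worker down
            report_queue.task_done()
            break
        completed_reports[course_id] = build_report(course_id)
        report_queue.task_done()

def request_analysis(course_id):
    """Called from the LTI app: acknowledge immediately, work happens later."""
    report_queue.put(course_id)
    return "Your request has been received; results will arrive by email."

threading.Thread(target=worker, daemon=True).start()
ack = request_analysis(101)   # returns immediately
report_queue.put(None)        # stop the worker once the queue drains
report_queue.join()           # wait for all queued work to finish
print(completed_reports[101])
```

The key point is that the user-facing call returns instantly; the expensive API calls happen later, decoupled from the request.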
Following is a list of some common items included in our report. Most of these were seen prior to this report being developed, and some were thought of while we were designing the report. Some are more important than others, and some are considered simple warnings. For example, over-grading might be done on purpose for extra credit, but we have no way of knowing that. If you have common issues that you come across, please share them and let's see if we can expand on the report:
There are many scenarios that can be derived from the data that is available.
Keep in mind that our gradebook analysis request is provided to our faculty through an LTI app. When faculty request an analysis report, we know who is making the request and which course they are interested in through the LTI integration. I am going to try and cover our approach in very general terms in hopes that it can be translated into whatever solution you have in mind. There is always more than one way to solve a problem, so please share any ideas you have. I would love to hear how you would improve or add to this idea.
If some of these API calls seem obvious, keep in mind there may be people reading this who haven't yet used the API, and they may benefit from the obvious. It is also worth mentioning that you will want to store the results of these API calls so the data can be inspected along the way. You may want to use dictionaries, or maybe hash tables, to make lookups easy; it is up to you to store the results using techniques specific to your approach to analyzing the API results.
For reporting purposes, we need to have the course details. Again, the report will be sent by email. If the instructor has requested analysis of multiple gradebooks, we want to clearly identify the course in each email.
The API call to get course details can be found here:
This API call returns a JSON object, which is defined on the same page here:
The gradebook is essentially a "view" of the assignments. When you create an assignment, that assignment automatically appears in the gradebook; you do not have to create an entry in the gradebook to represent it. So to analyze the gradebook, you need to analyze the assignments. To do this you will need a list of assignments.
The API call to get a list of course assignments can be found here:
This API call returns a list of JSON objects representing each assignment; the assignment object is defined here:
Note: In my scenario, I stored the list of JSON assignments in a dictionary&lt;assignment_id, assignment&gt; to allow me to easily iterate through each assignment, or look up a specific assignment by id.
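The dictionary pattern looks like this in Python. The sample objects are trimmed down to a few fields for illustration; the real objects returned by GET /api/v1/courses/:course_id/assignments carry many more.

```python
# Trimmed sample of what the assignments endpoint returns.
assignments_json = [
    {"id": 11, "name": "Essay 1", "points_possible": 100,
     "due_at": "2016-09-15T23:59:00Z"},
    {"id": 12, "name": "Quiz 1", "points_possible": 20, "due_at": None},
]

# Index the list by id so later checks can iterate or look up in O(1).
assignments = {a["id"]: a for a in assignments_json}

print(assignments[12]["name"])  # -> Quiz 1
```

The same one-line indexing comprehension works for submissions, enrollments, and assignment groups; only the key changes.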
To determine whether there are students who have not submitted their work, you will need to inspect each assignment for any associated submissions, and determine which students have or have not submitted the work.
The API call to get submissions for an assignment can be found here:
This API call returns a list of JSON objects representing each submission associated with the specific assignment; the submission JSON object is defined here:
Note: In my scenario I stored submissions for each assignment in a dictionary&lt;user_id, submission&gt;, so I could easily find a submission specific to a student, or determine whether a submission for a specific student even existed.
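A sketch of the missing-submission check, using trimmed sample data in place of the submissions endpoint's response. Note that depending on how you call the API, an unsubmitted student may appear with a "workflow_state" of "unsubmitted" or may be absent from the list entirely, so this sketch handles both cases.

```python
# Trimmed sample of submissions for one assignment.
submissions_json = [
    {"user_id": 501, "assignment_id": 11, "score": 95,
     "workflow_state": "graded"},
    {"user_id": 502, "assignment_id": 11, "score": None,
     "workflow_state": "unsubmitted"},
]

# dictionary<user_id, submission> as described above
submissions = {s["user_id"]: s for s in submissions_json}

# user_ids taken from the course enrollments
enrolled_user_ids = [501, 502, 503]

# Flag students who are absent from the list OR present but unsubmitted.
not_submitted = [
    uid for uid in enrolled_user_ids
    if uid not in submissions
    or submissions[uid]["workflow_state"] == "unsubmitted"
]
print(not_submitted)  # -> [502, 503]
```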
We provide student names to direct the instructor to specific items in the gradebook that have not been graded, or work that has not been submitted. To do that we need to retrieve the course roster.
The API call to get a list of enrollments can be found here:
This API call returns a list of JSON enrollment objects; the enrollment JSON object is defined here:
Note: In my scenario I stored enrollments in a dictionary&lt;user_id, enrollment&gt;, so I could easily iterate through each enrollment, or find a specific enrollment by user_id.
We also inspect assignment groups for some of the details defined above. Obvious potential issues include weighted assignment groups where a group has a 0% weight, or duplicate group names, or an empty assignment group (particularly if the weight of the group is > 0%).
The API call to get a list of assignment groups can be found here:
This API call returns a list of AssignmentGroup JSON objects, defined here:
Note: In my scenario I stored groups in a dictionary&lt;group_id, assignmentgroup&gt;, so I could easily iterate through the list of assignment groups, or find a specific assignment group by id.
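Two of the assignment-group checks mentioned above (zero-weight groups and duplicate names) are easy to express once the groups are collected. Again, this uses trimmed sample data standing in for the assignment_groups endpoint; "group_weight" only matters when the course actually weights its groups.

```python
# Trimmed sample of assignment group objects.
groups_json = [
    {"id": 1, "name": "Homework", "group_weight": 40},
    {"id": 2, "name": "Exams", "group_weight": 60},
    {"id": 3, "name": "Exams", "group_weight": 0},
]

# dictionary<group_id, assignmentgroup> as described above
groups = {g["id"]: g for g in groups_json}

# Groups carrying a 0% weight contribute nothing to the final grade.
zero_weight = [g["name"] for g in groups.values() if g["group_weight"] == 0]

# Duplicate names make the gradebook confusing and invite mistakes.
names = [g["name"] for g in groups.values()]
duplicates = sorted({n for n in names if names.count(n) > 1})

print(zero_weight)  # -> ['Exams']
print(duplicates)   # -> ['Exams']
```

An empty-group check would follow the same shape: count the assignments whose assignment_group_id points at each group and flag any weighted group with a count of zero.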
The API calls above will essentially gather all the data you need to start "interrogating" the gradebook. How do you do that? It's really all about digging into the JSON objects that have been collected by the API calls, and knowing what variables and information are exposed in those objects. Here I'll give you a few simple examples.
Assume we stored the result of the course details in a variable: courseDetails
Looking at the JSON definition, we want the value of courseDetails.grading_standard_id to be an integer greater than zero.
Looking at the JSON definition of the assignment objects, we want to inspect assignment.due_at
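Both of those inspections can be sketched together. The field names (grading_standard_id, due_at) come from the JSON definitions linked above; the sample values and the warning strings are mine.

```python
# Course details as returned by the course API (trimmed sample).
# grading_standard_id is null when no grading scheme is enabled.
courseDetails = {"id": 42, "name": "BIO 101", "grading_standard_id": None}

warnings = []
if not courseDetails.get("grading_standard_id"):
    warnings.append("No grading standard is enabled for this course.")

# Assignments indexed by id, as described earlier (trimmed sample).
assignments = {
    11: {"id": 11, "name": "Essay 1", "due_at": "2016-09-15T23:59:00Z"},
    12: {"id": 12, "name": "Quiz 1", "due_at": None},
}
# An assignment without a due date never "closes", which skews totals.
for a in assignments.values():
    if a["due_at"] is None:
        warnings.append(f"Assignment '{a['name']}' has no due date.")

print(warnings)
```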
Iterate through each of the enrollment records, and retrieve the submission for each student.
From the submission you should be able to retrieve the assignment, i.e. assignment[submission.assignment_id]
If the submission.score is greater than the associated assignment.points_possible, then that student has been over-graded and we should alert the instructor.
If the instructor intentionally over-graded the student, then it is up to them to simply ignore the warning.
If the instructor did not mean to over-grade the student, then they have the opportunity to correct the score or ask why they are getting the warning.
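The over-grading walk-through above can be sketched like this. For brevity I key the submissions by an (assignment_id, user_id) tuple; the sample data and the message format are illustrative only.

```python
# Trimmed samples of the collected API results.
enrollments = {501: {"user_id": 501, "user": {"name": "Pat Doe"}}}
assignments = {11: {"id": 11, "name": "Essay 1", "points_possible": 100}}
submissions = {(11, 501): {"assignment_id": 11, "user_id": 501, "score": 110}}

over_graded = []
for (assignment_id, user_id), sub in submissions.items():
    assignment = assignments[assignment_id]   # assignment[submission.assignment_id]
    score = sub["score"]
    # Ungraded submissions have a null score; skip those here.
    if score is not None and score > assignment["points_possible"]:
        student = enrollments[user_id]["user"]["name"]
        over_graded.append(
            f"{student}: {score}/{assignment['points_possible']} "
            f"on '{assignment['name']}'"
        )

print(over_graded)
```

Since over-grading may be intentional extra credit, this belongs in the "warning" category of the report rather than being flagged as an error.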
Once you familiarize yourself with the JSON object definitions, you will start to see how other scenarios can be detected. It's really all about learning what values are exposed in which objects, and building the logic around them. More complex scenarios simply require putting the puzzle pieces together.
By providing this analysis to the instructors, potential problems can be corrected before the data is transferred to the SIS. Problems can also be corrected during the semester to ensure that students are seeing grades that accurately reflect their current status.
Hopefully walking through the use of the API for this scenario gives you some ideas of how you can take advantage of the data that is available to you. If you are using the Canvas Data Portal, you may be able to simplify this process greatly. I hope to have an opportunity to explore that path in the near future and would be interested to know if others are using the data portal for similar goals.
I look forward to any feedback and ideas that others have, please share.
2016.11.14 - Update
If you would like to see source code demonstrating how to make API calls in .NET, I have published a blog here:
From this code you can automate administrative tasks and reporting to meet your needs.