I'm trying to gather all peer-review comments provided by a single user on a single assignment.
I get the submission comments from the Assignment API with:
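A sketch of that kind of call (the host, token, and IDs below are placeholders; `include[]=submission_comments` is the parameter that pulls the comments along with each submission):

```python
# Sketch: list an assignment's submissions with their submission comments.
# The host, token, and IDs are placeholders, not real values.
import json
import urllib.parse
import urllib.request

BASE = "https://canvas.example.edu/api/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

def submissions_url(course_id, assignment_id):
    """Endpoint for all submissions of one assignment, comments included."""
    query = urllib.parse.urlencode({"include[]": "submission_comments"})
    return (f"{BASE}/courses/{course_id}/assignments/{assignment_id}"
            f"/submissions?{query}")

def fetch_json(url):
    """GET a Canvas endpoint and decode the JSON body."""
    req = urllib.request.Request(url, headers=HEADERS)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Live call (needs a real host and token):
# for sub in fetch_json(submissions_url(123, 456)):
#     for comment in sub.get("submission_comments", []):
#         print(comment["author_id"], comment["comment"])
```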
But I also need the comments entered in the rubrics table and the Rubrics API associates the comment to the rubric and not the assignment.
I've used the same rubric on multiple assignments.
How do I associate the comment in the rubric to the correct assignment?
And finally, is it possible to gather the comments entered into the PDF through Crocodoc?
I don't have an example with peer reviews to test against, but have you tried adding &include[]=rubric_assessment? For a user submission (/api/v1/courses/:course_id/assignments/:assignment_id/submissions/:user_id) it returns the points and comments for each item in the rubric.
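Here's a sketch of that suggestion in Python (placeholder host, token, and IDs; the `rubric_assessment` include adds the per-criterion points and comments to the returned submission object):

```python
# Sketch: one user's submission with the rubric assessment included.
# Host, token, and IDs are placeholders.
import json
import urllib.parse
import urllib.request

BASE = "https://canvas.example.edu/api/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

def submission_url(course_id, assignment_id, user_id):
    """Endpoint for one submission; include[]=rubric_assessment pulls the
    per-criterion points and comments along with the submission."""
    query = urllib.parse.urlencode({"include[]": "rubric_assessment"})
    return (f"{BASE}/courses/{course_id}/assignments/{assignment_id}"
            f"/submissions/{user_id}?{query}")

# Live call (needs a real host and token):
# req = urllib.request.Request(submission_url(123, 456, 789), headers=HEADERS)
# with urllib.request.urlopen(req) as resp:
#     submission = json.load(resp)
# print(submission.get("rubric_assessment"))
```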
Thanks for the reply, Bill.
I have already tried that, but it only returns the rubric assessment of a submitted assignment, not when it is a peer review.
The Peer Reviews API only supports include[]=submission_comments.
The information I want is available through the Rubrics API, but I am unable to connect it to the appropriate assignment. The easy way out is to create a separate rubric for each assignment, but I hope not to have to do that.
@sigurd_k_brinch , I found that cross-referencing is possible with submissions of an assignment. artifact_id in Rubrics API results will match to id in Submissions API (for an assignment) and asset_id in Peer Reviews API (for the same assignment).
Unless I'm missing something, what's odd is that Rubrics API doesn't seem to surface the specific assignment in a rubric association. But, thankfully, it's implied by the cross-referencing above.
@RobDitto , I think the perceived oddness might be because rubrics are not necessarily unique to a particular assignment and can be reused, but an assignment can only have one rubric. In other words, rubric is a function of assignment, but assignment is not a function of rubric. It's not a one-to-one relationship.
Understood that it's not one-to-one. My quibble is that the API doesn't seem to surface rubric_association_id anywhere else but this one GET. A lot of cross-referencing is required to connect peer-review scores to an assignment.
I can imagine something like ?include=assignment, or maybe just returning assignment_id, making this much easier to work with.
Alas, I've found that Canvas's priorities don't always include making things easier for us to work with; instead, their APIs often return information in the way that they need it for their own apps. There is no place in their apps that needs to look up all of the assignments associated with a rubric, so they haven't exposed it. The demand for it just isn't big enough, and there is a workaround, as you mentioned. Depending on their database keys, it may also be an expensive lookup using a non-keyed field, and they try to keep API calls quick and responsive.
In my work with the API, I've found many places where multiple API calls could be combined into a single one if just [insert needed piece of data] was included.
In some ways, it reminds me of a database. The API is kind of like a normalized database where each table contains a part of the picture, but you may need to join several together to get the complete picture. Canvas Data, on the other hand, uses a star schema, which is denormalized, and tries to anticipate all of the information that you might need in one place (or with no more than one join), and so information is duplicated in multiple places. Unfortunately, that means with the API you may need to make multiple calls to get what you want, and with Canvas Data you get a lot of stuff that you may not need.
I'm going to admit I don't fully follow this discussion because there are lots of references to API calls followed by pronouns, and I'm not always sure what the antecedent is. So forgive me if I just repeat everything that's already been said as I try to walk myself through it.
When you request an assignment (multiple ways to do this)
You get a rubric object returned. This particular one had 19 criteria; I'm only showing the first.
That top-level ID is a context code: 521633_6859 is composed of a rubric ID and a criterion ID, separated by an underscore.
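That decomposition is trivial to apply in code (the sample ID is the one quoted above):

```python
def split_context_code(code):
    """Split a criterion context code like "521633_6859" into
    (rubric_id, criterion_id)."""
    rubric_id, criterion_id = code.split("_", 1)
    return int(rubric_id), int(criterion_id)

split_context_code("521633_6859")  # -> (521633, 6859)
```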
Based on that, I can go to the Rubrics API and fetch the rubric information. I do need to know where the rubric came from for this part. Was it an account rubric or a course rubric? I did look into that earlier, and it might be that it copies the account rubric into the course; I'm not sure. Anyway, that's probably not the stumbling point here.
This one might take a while, as all of the peer reviews are returned at once rather than being paginated.
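A sketch of that fetch, assuming a course-level rubric (placeholder host, token, and IDs; `include[]=peer_assessments` with `style=full` is what pulls the full assessment data in one response):

```python
# Sketch: one course rubric with all peer assessments inlined.
# Host, token, and IDs are placeholders.
import json
import urllib.parse
import urllib.request

BASE = "https://canvas.example.edu/api/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

def rubric_url(course_id, rubric_id):
    """Endpoint for one course rubric; the assessments come back in a
    single unpaginated response, so large classes take a while."""
    query = urllib.parse.urlencode(
        [("include[]", "peer_assessments"), ("style", "full")])
    return f"{BASE}/courses/{course_id}/rubrics/{rubric_id}?{query}"

# Live call (needs a real host and token):
# req = urllib.request.Request(rubric_url(123, 521633), headers=HEADERS)
# with urllib.request.urlopen(req) as resp:
#     rubric = json.load(resp)
# print(len(rubric["assessments"]))
```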
This is the top of the first assessment object. There are 109 of them in my data because there were 109 peer reviews.
Now let's look at the first assessment.
The artifact_type is "Submission", which means that the artifact_id is a submission ID. If the artifact_type were "Assignment", then the artifact_id would be an assignment ID. Notice that the Peer Reviews API allows for both types. In this case, it only makes sense to be tied to a submission; otherwise you would have no idea whom the peer review was reviewing.
The assessor_id is the user ID for the person who completed the assessment.
The data object contains the results for each of the 19 items in the rubric. On this first one, 4 points were awarded and no comments were left. Here's an example of what it looks like when comments are left.
The data is incomplete, but it may or may not be enough. If you just want to look at what people said, in order to review their reviews for assigning the reviewer a grade, you may not need to know whom they were reviewing. If you're pulling all the results together for archival purposes, you would want to know that.
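Pulling the non-empty comments out of one assessment's data array can be sketched like this (the field names and the sample record are assumptions based on the full-style assessment shape described above, not verified output):

```python
def criterion_comments(assessment):
    """Return (criterion_id, comments) for every rubric item on which the
    reviewer actually wrote something."""
    return [(d.get("criterion_id"), d["comments"])
            for d in assessment.get("data", [])
            if d.get("comments")]

# Invented sample shaped like one assessment object:
sample = {
    "assessor_id": 11,
    "data": [
        {"criterion_id": "521633_6859", "points": 4, "comments": ""},
        {"criterion_id": "521633_6860", "points": 3,
         "comments": "Cite your sources."},
    ],
}
criterion_comments(sample)  # -> [("521633_6860", "Cite your sources.")]
```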
The peer reviews API can pull in the other information.
This returns an unpaginated list of reviews.
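A sketch of that call (placeholder host, token, and IDs; `submission_comments` and `user` are the include values the Peer Reviews API accepts):

```python
# Sketch: list all peer reviews of one assignment, with submission
# comments and user objects inlined. Host, token, and IDs are placeholders.
import json
import urllib.parse
import urllib.request

BASE = "https://canvas.example.edu/api/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

def peer_reviews_url(course_id, assignment_id):
    """Endpoint listing an assignment's peer reviews (also unpaginated)."""
    query = urllib.parse.urlencode(
        [("include[]", "submission_comments"), ("include[]", "user")])
    return (f"{BASE}/courses/{course_id}/assignments/{assignment_id}"
            f"/peer_reviews?{query}")

# Live call (needs a real host and token):
# req = urllib.request.Request(peer_reviews_url(123, 456), headers=HEADERS)
# with urllib.request.urlopen(req) as resp:
#     reviews = json.load(resp)
```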
As you said, you need to match the artifact_type and artifact_id from the rubric with the asset_type and asset_id from the peer reviews.
Interestingly, the submission_comments are for the user's assessment and not the combination of user and assessor. The person leaving the submission comments in the example is not the assessor. Furthermore, they are repeated if the user received more than one peer review. Watch out that you don't duplicate the comments.
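Since those comments repeat on every review of the same submission, a simple way to avoid duplicates is to deduplicate by the comment's id (sample data below is invented):

```python
def unique_comments(peer_reviews):
    """Collect each submission comment once, keyed by its id, even though
    it appears on every peer review of the same submission."""
    seen = set()
    out = []
    for review in peer_reviews:
        for c in review.get("submission_comments", []):
            if c["id"] not in seen:
                seen.add(c["id"])
                out.append(c)
    return out

# Two peer reviews of the same submission carry the same comment:
reviews = [
    {"id": 9001, "submission_comments": [{"id": 77, "comment": "Nice work"}]},
    {"id": 9002, "submission_comments": [{"id": 77, "comment": "Nice work"}]},
]
unique_comments(reviews)  # -> one comment, not two
```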
So yes, there is some cross-referencing, but you had to know the assignment ID at the very beginning to get the codes to get the rubric. So it doesn't seem like such a big deal that the assignment isn't returned with the data, since you already know it or you wouldn't have gotten this far. It would be extra fluff (sometimes fluff is good), since the data isn't tied to an assignment; it's tied to a submission. The use of (artifact|asset)_(type|id) allows them to specify the key to the information without having to include extra information, but you should match off of both the type and the id to be safe.
But here's my question. I didn't need the rubric_association_id for anything. What is it, and why is it important that it be easier to get if you don't use it? Or maybe I'm just missing the picture on it? The documentation in the API is missing a description for that line. It's not the assignment ID the rubric belongs to, since rubrics don't belong to assignments. It's not the submission ID. It's the same for every person using the rubric for an assignment (this may not be true if the rubric was changed in the middle???). It's kind of like Canvas is providing us a single ID for all of the rubrics associated with a single assignment (I don't have multiple assignments to test this with), but otherwise it's not of use to us, because it's not available anywhere else -- at least not yet. I'm not even sure what's a question and what's a statement here!?
Your question and the discussion that followed inspired me to write a program to output the peer reviews with comments and attached PDF files into a directory tree. This makes it easy for a teacher to review the peer reviews. In the case of one of the courses I am working with, this makes it easier to send the comments made on the peer review itself back to the student who wrote it (unfortunately, Canvas does not seem to make this information available to the peer reviewer).
The program is get_peer_reviews_and_comments.py and can be found at https://github.com/gqmaguirejr/Canvas-tools
The program does not deal with rubrics, as currently I have a DOCX & LaTeX document that the peer reviewers fill in and add as an attachment to their comments. Using Adobe Acrobat, a teacher can mark up these peer-review documents, since one of the things we are trying to teach the students is how to write peer reviews. Sadly, neither this markup nor the grading of peer reviews is directly supported by Canvas.