When a rubric is edited at the sub-account or course level, the criteria and rating descriptions change retroactively for feedback already delivered to students. This means we lose proper visibility of the feedback students were actually given.
The idea is to create versioning for rubrics in a similar way to classic quizzes: each rubric has a version number, and a student's rating is tied to the version of the rubric in effect when the marking occurred. Subsequent edits to a rubric would increment its version, and future marking would reflect the updated content. Rubrics would also gain 'created_at' and 'updated_at' values available through the API.
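To make the proposal concrete, the behaviour could be modelled roughly like this. This is a hypothetical sketch only, not Canvas's actual data model; all class and field names here are illustrative assumptions:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class RubricVersion:
    """One immutable snapshot of a rubric's criteria (illustrative)."""
    version: int
    criteria: tuple
    created_at: datetime

class Rubric:
    """A rubric whose edits create new versions instead of mutating old ones."""
    def __init__(self, criteria):
        now = datetime.now(timezone.utc)
        self.created_at = now
        self.updated_at = now
        self.versions = [RubricVersion(1, tuple(criteria), now)]

    @property
    def current(self):
        return self.versions[-1]

    def update(self, criteria):
        """Editing increments the version; earlier snapshots survive untouched."""
        now = datetime.now(timezone.utc)
        self.updated_at = now
        self.versions.append(
            RubricVersion(self.current.version + 1, tuple(criteria), now))

@dataclass
class Assessment:
    """Student feedback pinned to the rubric version used at marking time."""
    rubric: Rubric
    rubric_version: int
    rating: str

    def criteria_as_marked(self):
        # Look up the snapshot that was current when marking occurred,
        # so later rubric edits never alter the feedback record.
        return next(v for v in self.rubric.versions
                    if v.version == self.rubric_version).criteria

rubric = Rubric(["Clarity: 0-5 points"])
assessment = Assessment(rubric, rubric.current.version, "4/5")
rubric.update(["Clarity: 0-10 points"])  # a later edit bumps the version to 2

print(rubric.current.version)            # 2
print(assessment.criteria_as_marked())   # ('Clarity: 0-5 points',)
```

The key property is the last two lines: the rubric now reads version 2, but the assessment still resolves to the criteria the student was actually marked against.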
We have a school looking to implement consistent marking through sub-account rubrics, and my current advice to them is to put version numbers in rubric titles and to delete old rubrics whenever there is an update. It would be far better if this were built into the system.
We also have appeals processes that rely on an accurate record of the feedback supplied to the student. If a student says "that's not the feedback I received", we have no way of challenging the assertion, not even an updated_at value for the rubric. Rubric data is also unavailable in Canvas Data (v1). In some cases the student would win the appeal automatically, simply because we cannot supply the relevant data.