I just finished grading an exam. The grade book shows my best student got a 99%, but the analytics show the highest score as 81%.
I called tech support, and they said this is normal behavior: the analytics only look at autograded scores from student submissions, while manual grade changes (like when you have an essay question that requires manual grading) are ignored in the calculations.
This is, to quote the very sympathetic and quick-to-pick-up tech support person, asinine. Who could even contemplate a system in which the statistics don't reflect reality?
Also, as long as I'm venting: the analytics "difficulty index" is misnamed. It is the proportion of students who answer a question correctly, so it is really an easiness index.
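To make the naming complaint concrete, here's a minimal sketch (with made-up response data, not anything from Canvas) of the statistic as described: a higher value means more students got the item right, i.e. an easier question.

```python
def difficulty_index(responses):
    """Proportion of correct responses (True = correct).

    This is the statistic described above: despite the name,
    a HIGHER value means an EASIER question.
    """
    return sum(responses) / len(responses)

# Hypothetical data: 9 of 10 correct vs. 2 of 10 correct
easy_question = [True] * 9 + [False]
hard_question = [True] * 2 + [False] * 8

print(difficulty_index(easy_question))  # 0.9 -> high "difficulty index", easy item
print(difficulty_index(hard_question))  # 0.2 -> low "difficulty index", hard item
```

So the easy question scores 0.9 and the hard one 0.2, which is exactly backwards from what the label "difficulty" suggests.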
In the old gradebook analytics, the graphs accurately reflect the post-grading scores of my entirely manually graded, essay-based exams, as well as the scores assigned for assignment submissions. There is no reason why new quizzes or new analytics should be any different. The question-by-question analytics within each quiz have always had limited information about essay questions, and I do see that the new quiz question analysis shows a mean score for my graded essay questions.

Is it the new gradebook analytics that only looks at autograded scores? If so, that seems really impractical, and like a bug that needs to be fixed rather than expected behavior.