I'm using Classic Quizzes because its report data is vital for what I need to report to my department.
Today I discovered a discrepancy between the "Correct Student Count" values in the Item Analysis Report and the aggregated count of students who got each problem correct from the Student Analysis Report (as counted in Excel with COUNTIF). On average, the Item Analysis reported a Correct Student Count about 10% smaller than the aggregate from the Student Analysis. Note that the assignment allowed two attempts with "keep highest score" enabled. I tested whether the Item Analysis was counting only the first attempt, but that number did not add up either.
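For anyone who wants to reproduce the comparison outside Excel, here is a minimal sketch of the aggregation I'm describing. It assumes a simplified Student Analysis export with one row per attempt and one points column per question; the column names (`name`, `attempt`, `q1_score`, etc.) are placeholders, not Canvas's actual headers.

```python
import csv
import io

# Hypothetical Student Analysis export: one row per student attempt,
# one points column per question. Column names are assumptions, not
# the real Canvas headers.
sample = """\
name,attempt,q1_score,q2_score
Alice,1,0,1
Alice,2,1,1
Bob,1,1,0
Cara,1,0,0
"""

def correct_counts(csv_text, question_cols):
    """Count, per question, how many students answered correctly,
    keeping only each student's highest-scoring attempt (mirroring
    the assignment's "keep highest score" setting)."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    best = {}  # student name -> (total score, best attempt row)
    for r in rows:
        total = sum(float(r[q]) for q in question_cols)
        if r["name"] not in best or total > best[r["name"]][0]:
            best[r["name"]] = (total, r)
    counts = {q: 0 for q in question_cols}
    for _, r in best.values():
        for q in question_cols:
            if float(r[q]) > 0:  # any points earned counts as correct
                counts[q] += 1
    return counts

print(correct_counts(sample, ["q1_score", "q2_score"]))
# -> {'q1_score': 2, 'q2_score': 1}
```

This is the number I would expect the Item Analysis "Correct Student Count" to match; instead it comes back consistently lower.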
Has anyone else noticed this issue? If so, can you explain how the Correct Student Count in the Item Analysis is actually computed?
I tested two different assessments and ran into the same issue on both.