Where do I find an explanation of the column headings in a quiz Item Analysis?
For example: Middle Student Count, Bottom Student Count, Quiz Question Count, Correct Student Count, Wrong Student Count, Correct Student Ratio, Wrong Student Ratio, Correct Top Student Count, Correct Middle Student Count, Correct Bottom Student Count, Variance, Standard Deviation, Difficulty Index, Alpha, Point Biserial of Correct, Point Biserial of Distractor 2, Point Biserial of Distractor 3, Point Biserial of Distractor 4, Point Biserial of Distractor 5, Point Biserial of Distractor 6, Point Biserial of Distractor 7
There is no mention in the referenced PDF, Canvas Quiz Item Analysis.pdf, of any of the quiz item analysis column headings. Still looking.
Point biserial means what?
Alpha is defined as "internal consistency of how closely related a set of items are as a group". What is considered an item in a question? What is considered a group in a question?
Ok, I'll try, but I'm not a quiz statistician by any means, and I don't have a quiz item analysis CSV to look at to give an interpretation of any values. I'm going to tag in my friend firstname.lastname@example.org as someone who has better knowledge of (Canvas) calculations and can translate it into plain language for the rest of us.
For the Alpha score, I believe it is looking at each single question and all the student responses for that question as a "set of items," especially when folks chose different answers for the same question. Again, without the CSV, I don't know what kind of value it spits out. Is there only one number, or is there a number for each question?
The Point Biserial is the reliability index of the correct answer. I don't know how the machine would know which questions are related to each other, but I suppose it looks at how often the correct answer was chosen by folks who got most items correct; this idea is completely my guess. Is this number a decimal? I imagine it's a decimal ratio of responses that chose that answer to all other answers.
From the PDF:
A point biserial is a correlation coefficient that relates observed item responses and is especially used when one set of data is dichotomous, meaning it can take only two values (here, correct and incorrect responses).
Ahh well - Shar
Interesting suppositions and you raise the questions I have about the statistics. I will continue to wait for a definitive answer. Thanks.
The source code has an example page that explains the quiz statistics that are available for each question type. It does not explain what they mean; those explanations are found in statistics textbooks and on websites. Another page in the source code documents that it is constructed for single pre-defined answers in a multiple-choice or true/false question.
If you Google point biserial correlation, you'll find more documentation about it than Canvas has. What I'm writing here is based on what I Googled and what I got from the Canvas source.
I don't do much with quiz statistics, but the calculations are in the item_analysis/item.rb file. I don't speak fluent Ruby, either, so this is my interpretation of what's going on, but you should defer to statistics sites if my explanations disagree. I tried to compute the point biserial correlation based on the calculations in Canvas, but I got 0.705 when using a sample standard deviation and 0.730 when using the population standard deviation (which is what it looks like Canvas is doing), while they got 0.70065 for the point biserial correlation. When I run the same data through Minitab, I get 0.730 for the correlation between points for the question and the score on the exam.
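To show where the population-vs-sample standard deviation difference comes in, here's a rough Python sketch of the textbook point biserial formula as I understand it. This is my own reading of the calculation, not the actual Canvas code, and the function name and sample numbers are made up for illustration:

```python
import math

def point_biserial(correct_flags, total_scores, population_sd=True):
    """Point biserial correlation between a 0/1 item result and total score.

    correct_flags: list of 0/1 (wrong/right) per student
    total_scores:  list of overall quiz scores per student
    population_sd: divide the variance by n (what Canvas appears to do);
                   set False to divide by n-1 (sample SD) for comparison.
    Assumes at least one correct and one incorrect response.
    """
    n = len(total_scores)
    mean = sum(total_scores) / n
    ddof = 0 if population_sd else 1
    sd = math.sqrt(sum((x - mean) ** 2 for x in total_scores) / (n - ddof))
    # Mean total score of the students who answered this item correctly
    m1 = sum(s for f, s in zip(correct_flags, total_scores) if f) / sum(correct_flags)
    p = sum(correct_flags) / n  # proportion who answered correctly
    return (m1 - mean) / sd * math.sqrt(p / (1 - p))
```

With the same inputs, the two settings give slightly different values, which is consistent with the small discrepancy I saw between my hand calculation and the Item Analysis number.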
Without a lot of investigation, I imagine the reason for the difference in the calculations is that my Student Analysis included all attempts of the students who took the quiz, while the Item Analysis only included the kept version of the quiz. For the question I was looking at, I had 15 attempts in the Student Analysis but only 6 students in the Item Analysis. In other words, if you allow multiple attempts like I do, your hand calculations from the Student Analysis may not match the supplied values from the Item Analysis because you're using different data sets.
As far as the calculations go, an item is each answer (response) that is possible for a multiple-choice or true/false question. Only those students who were administered the question are used in the calculations.
The interpretation of the point biserial correlation is similar to that of the Pearson product moment correlation coefficient. Values close to 0 indicate that this answer is not a good predictor of overall score. Positive values indicate that people who gave that particular answer did better overall, while a negative value indicates that people who gave that particular answer did worse overall. The closer the magnitude (absolute value) of the number is to 1, the more convinced you are that there is a relationship.
For a multiple choice question with more than 2 choices / options / levels / answers / responses, this process is repeated for each answer. If there are 4 possible responses, there would be 4 point biserial correlations. For each of them, the breakdown is between those who gave that response and those who gave some other response. One of the point biserial correlations is between the correct response and all of the incorrect responses. The others (three in this case) are between each of the incorrect responses and the other responses.
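To make the "one correlation per response" idea concrete, here's a hypothetical sketch (again, my interpretation rather than the Canvas source). For each answer option, it builds a 0/1 indicator of "chose this option" versus "chose some other option" and correlates that with total score, since the point biserial is just the Pearson correlation with a dichotomous variable:

```python
import math

def pearson(xs, ys):
    """Pearson product moment correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def answer_point_biserials(responses, scores, options):
    """One point biserial per answer option: indicator 'chose this option'
    (1) vs 'chose any other option' (0), correlated with total score."""
    return {opt: pearson([1 if r == opt else 0 for r in responses], scores)
            for opt in options}
```

For a 4-option question this produces 4 numbers; you'd hope the correct answer's value is positive and the distractors' values are near zero or negative.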
Alpha is Cronbach's Alpha. There is a single value of alpha for the entire quiz, but because of the way that the data is delivered, Canvas repeats it for each student. Based on the source code that computes Alpha, each item appears to be a question (the key for the item is the question id).
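For what it's worth, the textbook Cronbach's Alpha calculation looks roughly like this in Python. The per-question score lists are my assumed input shape, not necessarily how Canvas stores the data, but the formula itself is the standard one (k questions, item variances over total variance):

```python
def cronbach_alpha(item_scores):
    """Cronbach's Alpha for a whole quiz.

    item_scores: one inner list per question, each holding that question's
    scores aligned by student. Returns the single alpha value for the quiz.
    """
    k = len(item_scores)        # number of questions (items)
    n = len(item_scores[0])     # number of students

    def pvar(xs):
        # population variance (divide by n, matching the thread's reading
        # of what Canvas does elsewhere)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(q[i] for q in item_scores) for i in range(n)]
    return k / (k - 1) * (1 - sum(pvar(q) for q in item_scores) / pvar(totals))
```

Note the single return value: there is one alpha for the whole quiz, which is why the CSV repeats the same number on every row.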