Classic Quizzes: Learning from incorrect answers to questions and analyzing quiz submissions

maguire
Community Champion

I've used quizzes in a large course for a number of years, and some students were unhappy about needing exact matches for short answer and fill-in-multiple-blanks questions. So, in anticipation of New Quizzes, I decided to look at converting some of the short answer and fill-in-multiple-blanks questions to multiple-choice questions. This led to the thought of using some of the incorrect answers that students had submitted as distractors for the multiple-choice questions. This, and the earlier discussion in https://community.canvaslms.com/t5/Canvas-Developers-Group/Listing-all-question-groups-in-a-quiz/td-... - especially the answer from @James - made me write two programs.

The first program (quizzes-and-answers-in-course.py) collects all of the quizzes, their questions and answers, and the URLs of the students' quiz attempts for a course, and outputs this in an XLSX file. However, there is a problem: the URLs to the students' submissions point directly to the HTML version of each graded quiz, and I have not yet figured out how to access these via Python (because the local authentication requires SAML). So, while thinking about how to solve the SAML problem, I modified the program to simply create a directory tree of places where the HTML files can be placed, and then manually downloaded all of the submission attempts for one quiz. In a course with almost 100 students making one or more attempts on the quiz, this came to a total of 538 quiz attempts; at about 4 downloads per minute, it took a while to download them all into the pre-prepared tree.

The second program (augment_quizzes-and-answers-in-course.py) augments the spreadsheet with all of the incorrect answers that were given to short answer and fill-in-the-blank questions. It even computes the blank_id hashes, so it can reverse the hash and show the correct blank_id for the incorrect answers. This program parses the downloaded HTML, collects the information, and then augments the spreadsheet.
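To give an idea of the Canvas API side of the first program, here is a minimal sketch of fetching the quizzes, questions, and submissions (the base URL, token, and course_id below are placeholders; the real program handles configuration, more endpoints, and the XLSX output):

import requests

# Placeholders - the actual program reads these from a configuration file
canvas_base_url = "https://canvas.example.com/api/v1"
headers = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}
course_id = 11111

def get_paginated(url):
    # Follow Canvas's Link-header pagination until there is no "next" page
    results = []
    while url:
        r = requests.get(url, headers=headers)
        r.raise_for_status()
        results.extend(r.json())
        url = r.links.get("next", {}).get("url")
    return results

quizzes = get_paginated("{}/courses/{}/quizzes".format(canvas_base_url, course_id))
for quiz in quizzes:
    questions = get_paginated("{}/courses/{}/quizzes/{}/questions".format(
        canvas_base_url, course_id, quiz["id"]))
    # Note: the submissions endpoint wraps its list in a "quiz_submissions" key
    r = requests.get("{}/courses/{}/quizzes/{}/submissions".format(
        canvas_base_url, course_id, quiz["id"]), headers=headers)
    submissions = r.json()["quiz_submissions"]
    # each submission carries an html_url pointing at the graded attempt page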

Along the way I learned some interesting things:

Because the quizzes show the students the correct answers after each attempt and allow an unlimited number of attempts, two interesting behaviors emerged: (1) one student kept taking the quiz again and again (for a total of 6 times), even though they got all or nearly all of the possible points on each attempt, and (2) another student answered no questions at all on their first 5 attempts and then, on the 6th attempt, correctly answered all of the questions. The questions were selected from a pool of 70 questions (more specifically: 1 of 8, 1 of 9, 1 of 13, and 2 of 39).

The histograms of the number of students who stopped taking each of the 15 quizzes after N attempts were interesting. 
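For reference, one way to compute such a histogram from the quiz submissions returned by the API (assuming each submission dict carries the 'user_id' and 'attempt' fields that the Canvas API provides):

from collections import Counter

def attempts_histogram(quiz_submissions):
    # For each student, keep the highest attempt number seen
    last_attempt = {}
    for s in quiz_submissions:
        uid = s["user_id"]
        last_attempt[uid] = max(last_attempt.get(uid, 0), s["attempt"] or 0)
    # Map N -> number of students whose final attempt was attempt N
    return Counter(last_attempt.values())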

The computation of the hash digest of the blank IDs is:

import hashlib

def compute_canvas_blank_id_digest(bid):
    # Canvas derives the digest from "dropdown,<blank_id>,instructure-key"
    m = hashlib.md5()
    s1 = "dropdown,{},instructure-key".format(bid)
    m.update(s1.encode('utf-8'))
    return m.hexdigest()
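Since an MD5 digest cannot be inverted directly, "reversing" the hash in practice means precomputing the digest of every known blank_id for a question and then looking up the digests found in the submission HTML - roughly:

known_blank_ids = ["color1", "color2"]  # hypothetical blank IDs from one question
digest_to_blank_id = {compute_canvas_blank_id_digest(b): b
                      for b in known_blank_ids}
# A response field name such as "question_123_<digest>" can now be
# mapped back to its blank_id via this dictionary.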

A nice feature of the second program is that it can create a DOCX file with all of the questions and answers, either for the whole quiz or just for the short answer and fill-in-the-blanks questions with their correct and incorrect answers. I was able to edit this document into something that I could hand over to the next teacher, who will be responsible for using this quiz in a course this Fall.
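A minimal sketch of how such a DOCX file can be produced with the python-docx package (the question and answer field names follow the Canvas question API; the function name is illustrative):

from docx import Document

def write_quiz_docx(quiz_name, questions, filename):
    doc = Document()
    doc.add_heading(quiz_name, level=1)
    for q in questions:
        doc.add_heading(q["question_name"], level=2)
        doc.add_paragraph(q["question_text"])
        for a in q.get("answers", []):
            # In the Canvas API, correct answers have a non-zero weight
            marker = "correct" if a.get("weight", 0) > 0 else "incorrect"
            doc.add_paragraph("{} ({})".format(a.get("text", ""), marker),
                              style="List Bullet")
    doc.save(filename)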

Both programs are available from https://github.com/gqmaguirejr/E-learning.

My wife has suggested that if I were properly lazy, I would use the Canvas API to generate the new multiple-choice questions automatically (or at least something that could then be edited). However, I've not quite reached that point. Meanwhile, I need to figure out how to automate the downloading of the HTML files, as I have another 14 quizzes for this course and then another ~20 instances of this course to process 😉

The quiz I looked at had a total of 24 short answer and fill-in-multiple-blanks questions, while the full set of quizzes has 32 fill-in-multiple-blanks questions and 67 short answer questions - so there are a lot of questions to reformulate. In addition, there are other types of questions, with a breakdown of: 'multiple_answers_question': 59, 'multiple_choice_question': 29, 'true_false_question': 164, 'multiple_dropdowns_question': 6, 'matching_question': 14.
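Such a breakdown can be computed by tallying the question_type field over all of the questions in the course, reusing the get_paginated() helper sketched earlier:

from collections import Counter

type_counts = Counter()
for quiz in quizzes:
    questions = get_paginated("{}/courses/{}/quizzes/{}/questions".format(
        canvas_base_url, course_id, quiz["id"]))
    type_counts.update(q["question_type"] for q in questions)
print(type_counts)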

 

1 Solution
maguire
Community Champion
Author

Thanks for correcting the path to the GitHub repository. I have revised the programs:

quizzes-and-answers-in-course.py
augment_quizzes-and-answers-in-course.py

The first program can now perform the correct authentication to access the login-protected URLs and fetch the submitted quizzes (using a toolkit: kth_canvas_saml). Both programs have been revised to handle the case where the 'workflow_state' for a student's submission is "untaken" - this seems to indicate that the student started an attempt but did not submit it. In this case, the results_html is empty, and one needs to take the html_url and formulate requests for the submissions up to one less than the number of attempts by this student.
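A rough sketch of how the "untaken" case can be handled (this assumes an already-authenticated requests session, e.g. from the SAML toolkit, and that the graded-attempt page accepts a version query parameter - both are assumptions about the setup, not guaranteed API behavior):

def fetch_completed_attempts(session, submission):
    # An "untaken" submission has no results page of its own, so fetch
    # the graded pages for the earlier, completed attempts instead.
    pages = []
    if submission["workflow_state"] == "untaken":
        for version in range(1, submission["attempt"]):
            r = session.get(submission["html_url"], params={"version": version})
            if r.ok:
                pages.append(r.text)
    return pages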

Still missing is a third program to take the results from the second program and combine the data across multiple Canvas course_ids, integrating the incorrect answers for questions with one or several blanks from many offerings of the same course.
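One possible shape for that third program, sketched with pandas (the file layout and column names below are hypothetical):

import glob
import pandas as pd

frames = []
for path in glob.glob("course-*/augmented-quiz.xlsx"):
    # Assumed columns: question_id, blank_id, incorrect_answer
    frames.append(pd.read_excel(path))
combined = pd.concat(frames, ignore_index=True)
# Collect the distinct incorrect answers per question and blank
by_blank = combined.groupby(["question_id", "blank_id"])["incorrect_answer"].unique()
by_blank.to_frame().to_excel("combined-incorrect-answers.xlsx")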
