Individualized Quiz Questions

In an effort to curtail cheating, a British university has created chemistry exams with a unique data set for each student, making it impossible for students to share answers. The data sets are automatically generated. In this case, the exams were completed on paper and scored by hand.

Taking this process a step further, I am suggesting a new capability for Canvas quizzes to automatically generate individualized quiz questions based on parameters set by the instructor/designer. Scoring would be automated as Canvas would evaluate the correct answers for each student.

See the Inside Higher Ed article, Could Custom Exams Prevent Cheating?, and the Journal of Chemical Education article, Unique Data Sets and Bespoke Laboratory Videos: Teaching and Assessing of Experimental Methods and D...

KristinL
Community Team
Status changed to: Moderating

Hi @dmurphy1 -

Thanks for sharing your idea and its associated resources with the Community.

When you said that you'd like to generate individualized quiz questions based on parameters set by the instructor/designer, which parameters would you like to have as options? I think the more specific you can be, the more clearly Community members will be able to understand your proposal and how it may differ from Question Banks.

What are question banks? 

How do I create a quiz with a question group linked to a question bank? 

 

dmurphy1
Community Participant

Here's a simple example of individualized data sets for students taking a quiz with "rate, time, distance" math problems. You could use a question bank and create several variations of the question "Given a rate of x mph, how many miles would you travel in y hours?" By taking a random subset of questions from a question bank, you could reduce, but not entirely eliminate, the chances of students sharing answers. What I am proposing instead: the instructor sets up a single quiz question with a formula for calculating distance (d = r x t) and provides a range of values for rate and time to generate a unique data set for each student (e.g. rate > 20 & < 80 and time > 5 & < 24).

So, this question is generated for student A:

"Given a rate of 32 mph, how many miles would you travel in 15 hours."

and this question is generated for student B:

"Given a rate of 57 mph, how many miles would you travel in 9 hours."

The formula the instructor provided is used to evaluate each student's response.

Obviously, higher forms of math call for more sophisticated formulas and larger data sets. I realize there are a lot of technical considerations in implementing this functionality, but I thought this could be an effective approach to the problem of cheating on online exams.
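
To illustrate, here is a minimal sketch of the behavior I am proposing (the helper and the per-student seeding are hypothetical, not existing Canvas functionality):

import random

def make_question(student_id):
    """Generate one individualized rate-time-distance question and its answer key."""
    rng = random.Random(student_id)   # seed per student, so the question is reproducible
    rate = rng.randint(21, 79)        # rate > 20 and < 80
    hours = rng.randint(6, 23)        # time > 5 and < 24
    text = (f"Given a rate of {rate} mph, how many miles "
            f"would you travel in {hours} hours?")
    answer = rate * hours             # d = r x t, used to grade this student's response
    return text, answer

# one unique question per student
for student_id in (101, 102):
    text, answer = make_question(student_id)
    print(text, "->", answer)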

KristinL
Community Team
Status changed to: Open
 
milesl
Community Contributor

@dmurphy1 Are you familiar with the formula question type? They currently support the scenario you describe: How do I create a Simple Formula quiz question? - Instructure Community (canvaslms.com)

dmurphy1
Community Participant

Simple Formula quiz questions are a good start (and the name doesn't promise any more than that), but I was thinking of the complex data sets used in our Data Science program. Based on your comment, I took another look at the Simple Formula questions to see what is possible.

I tried to set up this classic exam question:

Given a right triangle with side A [a] inches long and side B [b] inches long, what is the length of the hypotenuse C?

The question only works if I want the students to give the length of side C squared, not its actual length.

You could say that I am proposing an expansion of the Simple Formula question functionality to support, not just Data Science, but all branches of high school and college math.

James
Community Champion

@dmurphy1 

I'm reading through this and must admit I'm confused about what you're after.

At first, I thought you were requesting the ability to make sure that quiz questions from a bank weren't duplicated. Canvas does not track which questions have been used, so Bob and Suzy may end up getting the same question. That has been requested in other places before, but that doesn't seem to be what you're after.

Then I thought you might be asking about formula questions, which already exist.

Then I thought you were asking for more complicated capabilities within formula questions. This has been requested often over the years. It appears you're using classic quizzes, based on the reference to [a] and [b], but the functionality is no better in new quizzes.

When I read the example you gave, it suggests that you maybe aren't aware of what can be done with formula questions. There are helper functions, which include the square root function sqrt(). You can create a formula that is sqrt(a^2+b^2) to get the length of the hypotenuse. There's a link to the helper functions in the blue section at the top of the document that @milesl linked to. Here is the direct link to the PDF.

All of that said, there are some huge deficiencies in formula questions. For example, you cannot generate nice answers and use them in the question text (give a perfect square in the question and ask for the square root). I've posted some blogs about hacks to accomplish things and there is a Chrome extension that was written to provide more functionality.

With the latest post, you want an expansion, but I'm not sure what that means. It's overly broad, like saying "I'd like Canvas to be better." There have been requests for better randomization, the ability to use calculated text in questions, and other enhancements over the years. Those are specifics, but I'm still not sure what you're after and why this should be a different feature request.

As a math teacher, I feel the pain. We've been asking for this for years without movement from Canvas. Right now, Canvas isn't developing classic quizzes anymore, so nothing is going to change there. They're putting their work into new quizzes, but the formula questions there saw a reduction in capabilities compared to classic quizzes.

dmurphy1
Community Participant

Admittedly, I did not see the helper functions document. My intent was to start a conversation, not provide a complete solution. I was hoping that other members of the Canvas Community could help this idea evolve. So my idea is broad: I'd like Canvas to be better... in the ability of quizzes to generate individualized data sets for complex math problems to reduce the opportunities for cheating.

dmurphy1
Community Participant

@James  @milesl 

In considering this discussion, I realized two things:

  1. I have more to learn about the Simple Formula questions.
  2. I wasn't clear about my original use case, which was the need for unique data tables for data science exam questions.

If an exam question asks students to simply solve an equation, I understand that Simple Formula questions have the functionality to produce a unique set of values for each student. However, Data Science students are asked to interpret, analyze and illustrate trends in data tables. I was proposing that Canvas quizzes could generate an individualized data table for each student and evaluate the student's answer relative to the table they were given.

Even with a small table, the probability that two students would have the same data (so that they could compare or share answers) is so small that this kind of cheating would be effectively impossible (and strongly discouraged).
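
As a rough back-of-the-envelope check, assuming each of the 15 cells in a table like the ones below is drawn independently from just 50 possible values, in a class of 30 students:

from math import comb

values_per_cell = 50   # assumed number of distinct values per cell
cells = 15             # 3 years x 5 data columns
students = 30          # assumed class size

p_pair = 1 / values_per_cell ** cells   # probability that two given students match
p_any = comb(students, 2) * p_pair      # union bound over all pairs of students
print(f"{p_any:.1e}")                   # about 1e-23, i.e. effectively zero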

This isn't just a narrow, selfish request: individualized data tables would be useful for exam questions in a variety of disciplines, from math to the physical sciences to the social sciences.

Thanks for your feedback and consideration.

maguire
Community Champion

As @James pointed out, you cannot be sure that a question is never given to another student; however, you can generate a lot of alternative questions, and since they are randomly selected, you reduce the probability that two students get the same question. For classic quizzes, I have used a program to generate hundreds of alternative questions; each question can have its own contents, data tables, etc. I do not know if there is an upper limit on the number of alternative questions you can have Canvas choose among, but I never found one (although viewing the questions via the GUI could take a long time; I did not really care, since I generated them with a program).

Is there an API for entering questions in New Quizzes (yet)?

On a related issue, today I was looking at mining data from students' responses to quiz questions. One of the things I think is missing is a field that says which questions were selected from the question bank for a given student's quiz. The fields returned via GET /api/v1/courses/:course_id/quizzes/:quiz_id/submissions come from the 'quiz_submissions' part of the response:

quiz_id
id
submission_id
quiz_version
user_id
score
kept_score
started_at
end_at
finished_at
attempt
workflow_state
fudge_points
quiz_points_possible
extra_attempts
extra_time
manually_unlocked
validation_token
score_before_regrade
has_seen_results
time_spent
attempts_left
overdue_and_needs_submission
excused?
html_url
result_url

The html_url and result_url return a part of the HTML for the whole submission for a given quiz, so this might include many questions and their answers. If you look at the HTML, you can find the question IDs, as shown in the figure below:

[Screenshot: a quiz submission page with the question IDs highlighted]

Some of the relevant HTML is shown below:

 

<div id="questions" class="assessment_results show_correct_answers">

<div role="region" aria-label="Question" class="quiz_sortable question_holder " id="" style="" data-group-id="">
<div style="display: block; height: 1px; overflow: hidden;">&nbsp;</div>
<a name="question_274816"></a>
<div class="display_question question true_false_question correct" id="question_274816">

 

Here we can see the question id question_274816, which can be found in the set of questions returned by GET /api/v1/courses/:course_id/quizzes/:quiz_id/questions.

We can also see the type of the question, in this case: true_false_question.

So one could look at all of the submissions and see whether or not two students got assigned the same question.
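
A rough sketch of that check (this assumes the html_url pages are fetchable with your credentials, as described above; the host, course/quiz ids, and token are placeholders):

import re
import requests

baseUrl = 'https://canvas.example.edu/api/v1'             # placeholder host
header = {'Authorization': 'Bearer ' + 'ACCESS_TOKEN'}    # placeholder token

def question_ids(submission):
    # fetch the submission's HTML and pull out the question_NNNNNN anchors
    html = requests.get(submission['html_url'], headers=header).text
    return set(re.findall(r'id="question_(\d+)"', html))

r = requests.get(baseUrl + '/courses/COURSE_ID/quizzes/QUIZ_ID/submissions',
                 headers=header)
assigned = {}   # question id -> user_ids who were given that question
for submission in r.json()['quiz_submissions']:
    for qid in question_ids(submission):
        assigned.setdefault(qid, []).append(submission['user_id'])

# any entry with more than one user means two students got the same question
print({qid: users for qid, users in assigned.items() if len(users) > 1})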


dmurphy1
Community Participant

@maguire I'd like to learn more about the program you use to generate numerous questions.

In the solution I am proposing, the questions can be the same because the data each student would need to answer the questions would be randomized/individualized.

Consider this example:

Student 1 would be presented with this table:

Year   Salaries ($000s)   Operations ($000s)   Bonuses (%)   Interest ($000s)   Taxes ($000s)
2019   288                98                   3.00          23.4               83
2020   342                112                  2.52          32.5               108
2021   324                101                  3.84          41.6               74

 

Student 2 would be presented with this table:

Year   Salaries ($000s)   Operations ($000s)   Bonuses (%)   Interest ($000s)   Taxes ($000s)
2019   306                98                   2.79          28.2               92
2020   289                112                  2.89          35.3               101
2021   314                101                  3.08          31.7               76

(Please don't belabor these numbers for their realism; they are totally random.)

Both students would be asked the same questions, such as

  • The total amount of Bonuses paid by the company during the years given is what percent of the total amount of Salaries paid during this period?
  • What is the ratio of the total expenditure on Taxes for all the years to the total expenditure on Operations for all the years?

As I mentioned previously, randomizing the data tables that students receive would effectively ensure that no two students could compare or exchange answers. Certainly not during a timed quiz. The complexity of the task would be an adequate deterrent in itself.
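
For concreteness, here is a sketch of how such a table and its answer key might be generated per student (the value ranges are as arbitrary as the numbers above):

import random

def make_table(student_id, years=(2019, 2020, 2021)):
    """One randomized row of figures per year, seeded by student id."""
    rng = random.Random(student_id)
    return {year: {'Salaries': rng.randint(280, 350),           # $000s
                   'Operations': rng.randint(90, 120),          # $000s
                   'Bonuses': round(rng.uniform(2.5, 4.0), 2),  # %
                   'Interest': round(rng.uniform(20.0, 45.0), 1),
                   'Taxes': rng.randint(70, 110)}               # $000s
            for year in years}

def taxes_to_operations_ratio(table):
    """Expected answer to the second sample question, from this student's table."""
    total_taxes = sum(row['Taxes'] for row in table.values())
    total_operations = sum(row['Operations'] for row in table.values())
    return round(total_taxes / total_operations, 2)

table = make_table(student_id=101)       # unique table for this student
print(table)
print(taxes_to_operations_ratio(table))  # graded against this student's data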

Thanks.

maguire
Community Champion

In Canvas classic quizzes, each question consists of the question text and any data that goes with it, so even if the text of the question is the same, if the data is different it is a different question (and hence can have its own specific answer). The key is to generate a lot of these questions and answers.

#!/usr/bin/python3
#
# ./insert_formula_questions.py course_id [quiz_id]
#
# inserts a calculated_question into a quiz for the indicated course.
#
# G. Q. Maguire Jr.
#
# 2017.06.19
#

import requests, time
from pprint import pprint
import optparse
import sys

#from io import StringIO, BytesIO

import json


#############################
###### EDIT THIS STUFF ######
#############################

# styled based upon https://martin-thoma.com/configuration-files-in-python/
with open('config.json') as json_data_file:
       configuration = json.load(json_data_file)
       access_token=configuration["canvas"]["access_token"]
       baseUrl="https://"+configuration["canvas"]["host"]+"/api/v1/courses/"


log_file = 'log.txt' # a log file. it will log things
header = {'Authorization' : 'Bearer ' + access_token}
payload = {}


##############################################################################
## ONLY update the code below if you are experimenting with other API calls ##
##############################################################################

def write_to_log(message):
       with open(log_file, 'a') as log:
              log.write(message + "\n")
              pprint(message)

def list_quizzes(course_id):
       global Verbose_Flag

       quizzes_found_thus_far=[]

       #List quizzes in a course
       # GET /api/v1/courses/:course_id/quizzes
       url = baseUrl + '%s/quizzes' % (course_id)
       if Verbose_Flag:
              print("url: " + url)

       r = requests.get(url, headers = header)
       if Verbose_Flag:
              write_to_log("result of getting quizzes: " + r.text)

       if r.status_code != requests.codes.ok:
              return quizzes_found_thus_far
       page_response=r.json()

       for p_response in page_response:
              quizzes_found_thus_far.append(p_response)

       # the following is needed when the response has been paginated,
       # i.e., when the response is split into pieces - each returning only some of the list of quizzes
       # see "Handling Pagination" - Discussion created by tyler.clair@usu.edu on Apr 27, 2015, https://community.canvaslms.com/thread/1500
       while 'next' in r.links:
              r = requests.get(r.links['next']['url'], headers=header)  
              page_response = r.json()  
              for p_response in page_response:  
                     quizzes_found_thus_far.append(p_response)

       return quizzes_found_thus_far

def create_quiz(course_id, name):
       global Verbose_Flag

       #Create a quiz
       # POST /api/v1/courses/:course_id/quizzes
       url = baseUrl + '%s/quizzes' % (course_id)
       if Verbose_Flag:
              print("url: " + url)
       payload={'quiz[title]': name}
       r = requests.post(url, headers = header, data=payload)
       write_to_log("result of post creating quiz: " + r.text)
       if r.status_code == requests.codes.ok:
              write_to_log("result of creating quiz in the course: " + r.text)
              page_response=r.json()
              print("inserted quiz")
              return page_response['id']
       return 0

def create_quiz_question(course_id, quiz_id, question):
       global Verbose_Flag

       global quiz_question_groups
       print("in create_quiz_question")

       question_category=question['question_category']
       print("question_category={}".format(question_category))
       quiz_group_id=quiz_question_groups.get(question_category)
       # if the group already exists for this category, then use the quiz_group_id, else create the question group
       if quiz_group_id is None:
              quiz_group_id=create_quiz_question_group(course_id, quiz_id, question_category, question)

       print("quiz_group_id={}".format(quiz_group_id))



       # Create a single quiz question - Create a new quiz question for this quiz
       # POST /api/v1/courses/:course_id/quizzes/:quiz_id/questions
       url = baseUrl + '%s/quizzes/%s/questions' % (course_id, quiz_id)

       if Verbose_Flag:
              print("url: " + url)
       payload={'question':
                {
                       'question_name': question['question_name'],
                       'points_possible': question['points_possible'],
                       'question_type': question['question_type'],
                       'question_text': question['question_text'],
                }
       }
       
       ans=question.get('answers')
       if ans:
              payload['question']['answers']= ans

       gid=question.get('quiz_group_id')
       if gid:
              payload['question']['quiz_group_id']= gid

       v=question.get('variables')
       if v:
              payload['question']['variables']= v

       f=question.get('formulas')
       if f:
              payload['question']['formulas']= f

       tol=question.get('answer_tolerance')
       if tol:
              payload['question']['answer_tolerance']= tol


       digs=question.get('formula_decimal_places')
       if digs:
              payload['question']['formula_decimal_places']= digs


       #if question.get('correct_comments'):
       #       payload.update({'question[correct_comments]': question['correct_comments']})
       #if question.get('incorrect_comments'):
       #       payload.update({'question[incorrect_comments]': question['incorrect_comments']})
       #if question.get('neutral_comments'):
       #       payload.update({'question[neutral_comments]': question['neutral_comments']})
       #if question.get('text_after_answers'):
       #       payload.update({'question[text_after_answers]': question['text_after_answers']})

       print("payload={}".format(payload))
       r = requests.post(url, headers = header, json=payload)

       write_to_log("result of post creating question: " + r.text)
       if r.status_code == requests.codes.ok:
              write_to_log("result of creating question in the course: " + r.text)
              page_response=r.json()
              print("inserted question")
              return page_response['id']
       return 0



def create_quiz_question_group(course_id, quiz_id, question_group_name, question):
       # return the quiz_group_id

       global Verbose_Flag

       # quiz_groups will be a dictionary of question_category and corresponding quiz_group_id
       # we learn the quiz_group_id when we put the first question into the question group
       global quiz_question_groups

       print("course_id={0}, quiz_id={1}, question_group_name={2}".format(course_id, quiz_id, question_group_name))

       quiz_group_id=quiz_question_groups.get(question_group_name)
       # if the group already exists for this category, then simply return the quiz_group_id
       if quiz_group_id is not None:
              return quiz_group_id

       # Create a question group
       # POST /api/v1/courses/:course_id/quizzes/:quiz_id/groups
       url = baseUrl + '%s/quizzes/%s/groups' % (course_id, quiz_id)

       if Verbose_Flag:
              print("url: " + url)
       payload={'quiz_groups':
                [
                {
                       'name': question_group_name,
                       'pick_count': 1,
                       'question_points': question['points_possible']
                }
                ]
       }

       print("payload={}".format(payload))
       r = requests.post(url, headers = header, json=payload)

       write_to_log("result of post creating question group: " + r.text)
       print("r.status_code={}".format(r.status_code))
       if (r.status_code == requests.codes.ok) or (r.status_code == 201):
              write_to_log("result of creating question group in the course: " + r.text)
              page_response=r.json()
              if Verbose_Flag:
                     print("page_response={}".format(page_response))
              # store the new id in the dictionary
              if Verbose_Flag:
                     print("inserted question group={}".format(question_group_name))
              # '{"quiz_groups":[{"id":541,"quiz_id":2280,"name":"Newgroup","pick_count":1,"question_points":1.0,"position":2,"assessment_question_bank_id":null}]}')
              quiz_group_id=page_response['quiz_groups'][0]['id']
              quiz_question_groups[question_group_name]=quiz_group_id
              if Verbose_Flag:
                     print("quiz_group_id={}".format(quiz_group_id))
              return quiz_group_id

       return 0

def main():
       global Verbose_Flag
       global quiz_question_groups

       # constant(s)

       # assumed upper limit of number of alternatives in multiple choice questions
       max_choices=10

       # assumed upper limit on number of question fields that are common
       max_question_fields=10


       parser = optparse.OptionParser()

       parser.add_option('-v', '--verbose',
                         dest="verbose",
                         default=False,
                         action="store_true",
                         help="Print lots of output to stdout"
       )

       options, remainder = parser.parse_args()

       Verbose_Flag=options.verbose
       if Verbose_Flag:
              print('ARGV      :', sys.argv[1:])
              print('VERBOSE   :', options.verbose)
              print('REMAINING :', remainder)

       # add time stamp to log file
       log_time = str(time.asctime(time.localtime(time.time())))
       write_to_log(log_time)   

       if (len(remainder) < 1):
              print("Insuffient arguments\n must provide course_id quiz_id\n")
              return

       if (len(remainder) >= 1):
              course_id=remainder[0]
              if Verbose_Flag:
                     print("course_id={}".format(course_id))

       quiz_id_valid=False
       question_group_name='Newgroup'
       quiz_question_groups=dict()
       question=dict()
       quiz_name='sample multiple answer quiz'


       if (len(remainder) >= 2):
              quiz_id=remainder[1]
              if Verbose_Flag:
                     print("quiz_id={}".format(quiz_id))
              #check that this quiz_id is valid
              quizzes=list_quizzes(course_id)

              for q in quizzes:
                     if Verbose_Flag:
                            print("valid q['id']={}".format(q['id']))
                     if q['id'] == int(quiz_id):
                            if Verbose_Flag:
                                   print("valid quiz_id={}".format(quiz_id))
                            quiz_id_valid=True
                            break
       else:
              # need to create a quiz
              quiz_id=create_quiz(course_id, quiz_name)
              quiz_id_valid=True
              if Verbose_Flag:
                     print("quiz_id={}".format(quiz_id))


       # if the quiz_id is not valid then there is nothing to do
       if not quiz_id_valid:
              print("quiz_id={} is not valid".format(quiz_id))
              return

       # now do question type specific processing
       question=dict()
       question['question_type']="calculated_question"
       question['question_name']="Q6"
       question['points_possible']=1
       question['question_text']="<p>Simple formula question. <span>What is 5 plus [x]?</span></p>"
       question['question_category']='Unknown'

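       # For a calculated_question, Canvas expects one answer entry per generated
       # variable combination; every integer value of x in 0..10 is enumerated
       # below, each with its computed answer 5+x.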
       question['answers']=[
              {'answer_weight': 100, "variables": [{"name": "x", "value": "0"}], "answer_text": 5 },
              {'answer_weight': 100, "variables": [{"name": "x", "value": "1"}], "answer_text": 6 },
              {'answer_weight': 100, "variables": [{"name": "x", "value": "2"}], "answer_text": 7 },
              {'answer_weight': 100, "variables": [{"name": "x", "value": "3"}], "answer_text": 8 },
              {'answer_weight': 100, "variables": [{"name": "x", "value": "4"}], "answer_text": 9 },
              {'answer_weight': 100, "variables": [{"name": "x", "value": "5"}], "answer_text": 10},
              {'answer_weight': 100, "variables": [{"name": "x", "value": "6"}], "answer_text": 11},
              {'answer_weight': 100, "variables": [{"name": "x", "value": "7"}], "answer_text": 12},
              {'answer_weight': 100, "variables": [{"name": "x", "value": "8"}], "answer_text": 13},
              {'answer_weight': 100, "variables": [{"name": "x", "value": "9"}], "answer_text": 14},
              {'answer_weight': 100, "variables": [{"name": "x", "value": "10"}], "answer_text": 15}]
       question['variables']=[{"name": "x", "min": 0, "max": 10, "scale": 0}]

       #question['formulas']=[{"formula": "5+x"}]
       # note that the above does not work, you have to do it as below:
       question['formulas']=["5+x"]

       question['answer_tolerance']="0"
       question['formula_decimal_places']=0


       question_id=create_quiz_question(course_id, quiz_id, question)

       # add time stamp to log file
       log_time = str(time.asctime(time.localtime(time.time())))
       write_to_log(log_time)   
       write_to_log("\n--DONE--\n\n")

if __name__ == "__main__": main()

maguire
Community Champion

So, to be more precise: you put the table that you want the student to get, together with the question, into the 'question_text'. You then put all of these questions and their corresponding answers into a question group, from which you tell Canvas to randomly pick one question.

For example, you might ask for a single numeric value with:

question['question_type']='numerical_question'

You can specify whether you want an exact match, an approximate answer, precision, etc., as per https://canvas.instructure.com/doc/api/quiz_questions.html

You have to give the answer to each question when you generate the question.

Now, you just generate a number of questions (each with its own embedded table) and their corresponding answers. 
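
For example, inside the script's main() above, a single embedded-table question might be built like this (the numerical-answer field names follow the Answer object in the quiz_questions API documentation linked above, but treat this as a sketch rather than tested code):

       question=dict()
       question['question_type']='numerical_question'
       question['question_name']='Table Q2'
       question['points_possible']=1
       question['question_category']='Table questions'
       # the individualized data is embedded directly in the question text
       question['question_text']=(
              '<p>Using the table below, give the ratio of total Taxes to total '
              'Operations (2 decimal places).</p>'
              '<table><tr><th>Year</th><th>Operations ($000s)</th><th>Taxes ($000s)</th></tr>'
              '<tr><td>2019</td><td>98</td><td>83</td></tr>'
              '<tr><td>2020</td><td>112</td><td>108</td></tr>'
              '<tr><td>2021</td><td>101</td><td>74</td></tr></table>')
       # exact answer with a small error margin: (83+108+74)/(98+112+101) = 0.85
       question['answers']=[{'answer_weight': 100,
                             'numerical_answer_type': 'exact_answer',
                             'exact': 0.85,
                             'margin': 0.01}]

       question_id=create_quiz_question(course_id, quiz_id, question)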

James
Community Champion

@dmurphy1 

The table that you want to create can be done using the formula question type. The problem is that you only get one question with a formula question type.

If you want to use Classic Quizzes and ask multiple questions, then you will need to use multiple fill-in-the-blank questions. This comes with its own set of problems, such as having to allow for every possible correct answer (it does a text-only match), so (for example) 19.2 and 19.23 and 19.231 would all need to be entered if you wanted to allow the student to enter a varying number of decimal places. Also, since it's not a formula question, you cannot generate the random numbers that populate the table. That's where the problem generation that @maguire was mentioning comes in.

In New Quizzes, you can have a stimulus question where you would present the data, and then you can have multiple questions that are linked to it. The last I checked (it's been a while), you cannot use the information from the stimulus question in the individual questions under it. By that, I mean that you could not use a formula question for the stimulus and then have the individual questions know what those values were. So, once again, we're back to creating multiple versions of the questions by hand (or via the API). New Quizzes hasn't released its API yet.

maguire
Community Champion

@dmurphy1 

With Classic Quizzes, I've used the method that @James mentioned to handle multiple questions (i.e., make a fill_in_multiple_blanks question) and simply generated a lot of answers to cover the cases needed for the exact match against the different values that a user could enter.
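
A sketch of what that enumeration could look like, using the same question dict format as the script above (the blank ids and the accepted text forms are illustrative):

       question=dict()
       question['question_type']='fill_in_multiple_blanks_question'
       question['question_name']='Table Q1 and Q2'
       question['points_possible']=2
       question['question_category']='Table questions'
       question['question_text']=('<p>Bonuses as a percent of Salaries: [blank1]</p>'
                                  '<p>Ratio of Taxes to Operations: [blank2]</p>')
       # grading is an exact text match, so every acceptable form must be listed
       question['answers']=[
              {'answer_weight': 100, 'blank_id': 'blank1', 'answer_text': '3.1'},
              {'answer_weight': 100, 'blank_id': 'blank1', 'answer_text': '3.12'},
              {'answer_weight': 100, 'blank_id': 'blank2', 'answer_text': '0.85'},
              {'answer_weight': 100, 'blank_id': 'blank2', 'answer_text': '0.852'},
       ]
       question_id=create_quiz_question(course_id, quiz_id, question)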

ProductPanda
Instructure
Status changed to: Archived
Comments from Instructure

As part of the new Ideas & Themes process, all ideas in Idea Conversations were reviewed by the Product Team. Any Idea that was associated with an identified theme was moved to the new Idea & Themes space. Any Idea that was not part of the move is being marked as Archived. This will preserve the history of the conversations while also letting Community members know that Instructure will not explore the request at this time.