timfin
Community Member

extract individual quiz responses

I'm looking for the best way to extract all user quiz responses across my entire organization. For example, I have a post-course survey that collects feedback in the form of multiple choice questions and essay questions. I want to see the individual responses per user and course. I've tried using {{domain}}/api/v1/courses/:course_id/quizzes/:quiz_id/submissions/:id/events and {{domain}}/api/v1/courses/:course_id/quizzes/:quiz_id/statistics to pull the info out, but it seems like there must be a better way to get at this data. Any help would be appreciated, thanks!

4 Replies
DesertPanda
Community Member

I am having the same problem. I know this thread is old, but did you ever get an answer?

Jeff_F
Community Champion

@timfin @DesertPanda 

I've not been able to do this using CD1 (Canvas Data), so where needed we also use the API to extract the data from the student analysis report and place it into a table.

Alternative ideas include using a tool designed for end-of-course surveys, such as EvaluationKIT by Watermark, or even a Microsoft Form. Since we have Office 365 for everyone, a link to such a form is easy to create. The Watermark route is a more secure way of ensuring that only people enrolled in the course are posting responses; another benefit of Watermark is that the responses are anonymous.

sor1
Community Participant

To retrieve quiz results, I needed to use the API:

- List Terms

- List Courses for desired Term

- For each course, get list of Quizzes

- (Determine if the Quiz matched the criteria)

- For each quiz, get submissions

- For each submission, get details (events). Note that there might be multiple responses for a question
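Under the assumption that you have an admin-scoped token and a Canvas domain (both hypothetical below), that workflow can be sketched in Python with just the standard library. The endpoint paths come from the public Canvas REST API; note that some endpoints nest their list inside a wrapper key, and every list is paginated via the Link header:

```python
import json
import urllib.parse
import urllib.request

BASE_URL = "https://example.instructure.com/api/v1"  # hypothetical domain
TOKEN = "YOUR_API_TOKEN"                             # hypothetical token

def parse_next_link(link_header):
    """Extract the rel="next" URL from Canvas's pagination Link header."""
    for part in (link_header or "").split(","):
        pieces = part.split(";")
        if len(pieces) == 2 and 'rel="next"' in pieces[1]:
            return pieces[0].strip().strip("<>")
    return None

def get_page(url):
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {TOKEN}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp), resp.headers.get("Link")

def get_all(path, key=None, **params):
    """Fetch every page; `key` unwraps endpoints that nest their list
    (e.g. terms, quiz submissions, quiz submission events)."""
    params["per_page"] = 100
    url = f"{BASE_URL}{path}?{urllib.parse.urlencode(params, doseq=True)}"
    results = []
    while url:
        page, link = get_page(url)
        results.extend(page[key] if key else page)
        url = parse_next_link(link)
    return results

def quiz_matches(quiz):
    """Placeholder for the 'quiz matched the criteria' step; here, by title."""
    return "survey" in quiz["title"].lower()

def walk(account_id):
    for term in get_all(f"/accounts/{account_id}/terms", key="enrollment_terms"):
        for course in get_all(f"/accounts/{account_id}/courses",
                              enrollment_term_id=term["id"]):
            for quiz in get_all(f"/courses/{course['id']}/quizzes"):
                if not quiz_matches(quiz):
                    continue
                subs = get_all(
                    f"/courses/{course['id']}/quizzes/{quiz['id']}/submissions",
                    key="quiz_submissions")
                for sub in subs:
                    events = get_all(
                        f"/courses/{course['id']}/quizzes/{quiz['id']}"
                        f"/submissions/{sub['id']}/events",
                        key="quiz_submission_events")
                    # a question can appear in several events if the student
                    # changed an answer; keep only the latest per question
                    yield term, course, quiz, sub, events
```

Error handling and rate limiting are omitted for brevity; a real script should respect Canvas's throttling headers.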

James
Community Champion

I've replied to this question in other places, but missed seeing it here. Sorry for the delay and to help make up for it, I will try to include a more complete response than I've given in other places.

For classic quizzes, all of the responses for each quiz and for multiple students can be obtained using the Submissions API. The trick is to add the query parameter include[]=submission_history and to use the assignment_id, not the quiz_id.
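A minimal sketch of that call in Python (hypothetical domain, token, and IDs; only the first page is fetched here, so a real script should follow Canvas's pagination Link header):

```python
import json
import urllib.parse
import urllib.request

BASE_URL = "https://example.instructure.com/api/v1"  # hypothetical domain
TOKEN = "YOUR_API_TOKEN"                             # hypothetical token

def submissions_url(course_id, assignment_id):
    """Build the Submissions API URL with submission_history included.
    Note it takes the quiz's assignment_id, not its quiz_id."""
    query = urllib.parse.urlencode(
        {"include[]": "submission_history", "per_page": 100})
    return (f"{BASE_URL}/courses/{course_id}/assignments/"
            f"{assignment_id}/submissions?{query}")

def quiz_responses(course_id, assignment_id):
    """Yield (user_id, attempt number, per-question entry) for every
    submission on the first page of results."""
    req = urllib.request.Request(
        submissions_url(course_id, assignment_id),
        headers={"Authorization": f"Bearer {TOKEN}"})
    with urllib.request.urlopen(req) as resp:
        submissions = json.load(resp)
    for sub in submissions:
        # one history element per attempt, each with submission_data
        for attempt in sub.get("submission_history", []):
            for entry in attempt.get("submission_data", []):
                yield sub["user_id"], attempt.get("attempt"), entry
```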

The answers are coded, in that multiple-choice responses don't give the actual response text; they give a code for the response, which is then compared against the question's list of possible answers to identify the response.

When you include the submission history, you get an array called submission_history with one element per submission made (multiple attempts mean multiple submissions). Each of those elements includes a property called submission_data, an array with one entry per question. The entries appear to be in the order the questions were asked on the quiz, but I won't state that definitively, because in the example I'm looking at I answered the questions in the order they were presented.

For example, here's what a response looks like:

{
  "submission_data":[
    {
      "correct":true,
      "points":2,
      "question_id":61125947,
      "answer_id":"5172",
      "text":"red",
      "more_comments":""
    },
    {
      "correct":"partial",
      "points":2,
      "question_id":61125950,
      "text":"",
      "answer_for_verb":"saw",
      "answer_id_for_verb":"1423",
      "answer_for_noun":"texas",
      "answer_id_for_noun":null,
      "more_comments":""
    },
    {
      "correct":"partial",
      "points":2.4,
      "question_id":61125956,
      "text":"",
      "answer_3288":"1",
      "answer_6891":"0",
      "answer_549":"1",
      "answer_5821":"0",
      "answer_5900":"0",
      "more_comments":""
    },
    {
      "correct":false,
      "points":0,
      "question_id":61125957,
      "text":"",
      "answer_for_noun":1572,
      "answer_id_for_noun":1572,
      "answer_for_food":6061,
      "answer_id_for_food":6061,
      "more_comments":""
    },
    {
      "correct":true,
      "points":3,
      "question_id":61125983,
      "answer_id":4705,
      "text":"0.1250",
      "more_comments":""
    },
    {
      "correct":true,
      "points":2,
      "question_id":61125986,
      "answer_id":8243,
      "text":"11.0000",
      "more_comments":""
    },
    {
      "correct":"defined",
      "points":0,
      "question_id":61125987,
      "text":"<p>peace</p>",
      "more_comments":""
    },
    {
      "correct":"defined",
      "points":0,
      "question_id":61125988,
      "attachment_ids":null,
      "more_comments":""
    }
  ]
}


Here is what was in the quiz:

  1. Fill in the blank question where the student typed "red".
  2. Fill in multiple blanks question that had two blanks called "verb" and "noun". The student's responses were "saw" and "texas". Since there is an answer_id_for_verb that has a value, that was one of the accepted responses. answer_id_for_noun is null, meaning it didn't match.
  3. Multiple answers question where the student selected answer_3288 and answer_549. The student did not select answer_6891 (which was correct) or answer_5821 or answer_5900 (both of which were incorrect). There is no way to know from just this data which answers were correct, but the answers appear in the order they were presented to the student.
  4. Multiple dropdowns question. This works like the fill in multiple blanks question where there were two items called "noun" and "food". The existence of an id for the answer_id_for_xxx means they got xxx correct.
  5. Numeric question response where the student responded with 0.1250. Technically, the student typed 0.125 and Canvas added the fourth decimal place.
  6. Formula question where the student answered 11 and Canvas padded it to four decimal places.
  7. Essay question where the student typed "peace" and didn't get any points for it but had to be manually graded.
  8. File upload question that the student didn't answer but had to be manually graded. You can see from the data that it is going to include the attachment_ids for the uploaded files.
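Since the entries mix several field patterns, a small classifier can help when processing submission_data programmatically. This is a sketch based only on the field patterns in the example above (an observed convention, not an official schema):

```python
import re

def classify_entry(entry):
    """Guess which question type produced one submission_data entry,
    using the field patterns seen in the example response above."""
    if "attachment_ids" in entry:
        return "file_upload"                   # item 8
    if any(re.fullmatch(r"answer_\d+", k) for k in entry):
        return "multiple_answers"              # item 3
    if any(k.startswith("answer_for_") for k in entry):
        return "multiple_blanks_or_dropdowns"  # items 2 and 4
    if "answer_id" in entry:
        return "single_answer"                 # items 1, 5, 6
    return "essay_or_other"                    # item 7
```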

I just realized that my quiz, designed to test all the different question types, doesn't have any multiple choice questions on it. Can you tell I teach math and not a social science? Anyway, a multiple choice response includes an answer_id property whose value matches one of the question's possible responses.

All of what I wrote is explained in the documentation, but in the appendix at the bottom of the Quiz Submission Questions API page. The placement of the information is odd, since you can't actually get the student responses from that API. You can get the ID of the quiz submission from the Quiz Submissions API (use the id field) and then use that ID with the Quiz Submission Questions API.
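That two-step chain might look like this (hypothetical domain and token; pagination and error handling omitted for brevity):

```python
import json
import urllib.request

BASE_URL = "https://example.instructure.com/api/v1"  # hypothetical domain
TOKEN = "YOUR_API_TOKEN"                             # hypothetical token

def questions_url(quiz_submission_id):
    """Quiz Submission Questions endpoint, keyed by the quiz submission's id."""
    return f"{BASE_URL}/quiz_submissions/{quiz_submission_id}/questions"

def get_json(url):
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {TOKEN}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def questions_for_quiz(course_id, quiz_id):
    """Step 1: the Quiz Submissions API gives each submission's id.
    Step 2: that id keys the Quiz Submission Questions API."""
    subs = get_json(f"{BASE_URL}/courses/{course_id}/quizzes/"
                    f"{quiz_id}/submissions")["quiz_submissions"]
    for sub in subs:
        qs = get_json(questions_url(sub["id"]))["quiz_submission_questions"]
        yield sub["id"], qs
```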

A much easier way to get information about each question is the Quiz Questions API and its endpoint for listing the questions in a quiz or submission. It works at the level of the whole course rather than individual students, so it involves fewer API calls. This uses the quiz_id, not the assignment_id. It works most of the time, but it has issues showing information for questions linked to a question bank. It may also have problems if responses were modified (a mistake was corrected), but for those cases it does let you specify the quiz_submission_id and quiz_submission_attempt.
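Once you have the question objects, decoding the answer_id codes in submission_data becomes a dictionary lookup. A sketch, assuming the id and text fields documented on each answer record:

```python
def answer_lookup(questions):
    """Build {(question_id, str(answer_id)): answer text} from the question
    objects returned by the Quiz Questions API, so the coded answer_id
    values in submission_data can be translated back to readable answers.
    Answer ids are keyed as strings because submission_data sometimes
    reports them as strings (e.g. "5172") and sometimes as numbers."""
    lookup = {}
    for q in questions:
        for ans in q.get("answers") or []:
            lookup[(q["id"], str(ans["id"]))] = ans.get("text", "")
    return lookup
```

Pair this with the Submissions API output: for each submission_data entry with an answer_id, look up (question_id, str(answer_id)) to recover the text the student chose.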