Converting Quiz format

Nva2023
Community Explorer

Hi All and @James 

We have a lot of essay (short, one-sentence answer) and fill-in-the-blank quiz questions.

Now we need to convert them to matching-type quiz questions.

Is there a tool that can convert matching questions to QTI format?

Any ideas on how to do it - except manually? 🙂

James
Community Champion

@Nva2023 

Thanks for clarifying what you are attempting to do. Like I said, I have only used Respondus, but there were some free alternatives to it listed when I did a search. Some of those looked like they would take your type of file (which is not a QTI file).

I can talk about the API, though.

You can tell there's a huge difference between what you're sending and what Canvas is creating. Canvas sends a lot more than is absolutely necessary.

It also looks like you didn't include everything Canvas sent, since you're wrapping it in an array rather than an object. That makes me wonder whether you've shown your full request, too.

The other thing is that you're showing me the results of a GET from Canvas (or the results after a POST) rather than the payload of the POST itself. The fields are different, and you cannot take a GET response (or what the POST returns) and blindly insert it into a new question. For example, comments_html in the answer section is called answer_comment_html in the POST, left is called answer_match_left, and so on.
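
To illustrate (a sketch based on the field names above, not a full payload; I'm assuming right maps to answer_match_right the same way), an answer item as it comes back from a GET looks something like:

{
  "left": "Salt Lake City",
  "right": "Utah",
  "comments_html": "<p>Salt Lake City is the capital of Utah.</p>"
}

while the same item in a POST payload needs:

{
  "answer_match_left": "Salt Lake City",
  "answer_match_right": "Utah",
  "answer_comment_html": "<p>Salt Lake City is the capital of Utah.</p>"
}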

The most common error I see with this is that people forget to wrap the request in a question property.

You can see that when you look at the Create a single quiz question API endpoint documentation.

Try wrapping it like this (of course, comments are not allowed in JSON; I'm just adding one for explanation purposes):


{
  "question": {
    // put your existing object in here
  }
}


In fact, I took your request, wrapped it in the question property and sent it to Canvas. It added a new question with one item. It did not add the distractors or some of your text, though.
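
If you want to test this outside the browser, a short script along these lines should work. This is just a sketch, not anything from the original request: the domain, IDs, and token are placeholders, and the payload is trimmed down.

import requests

# Placeholders: substitute your Canvas domain, course ID, quiz ID, and access token.
url = "https://<your-canvas-domain>/api/v1/courses/<course_id>/quizzes/<quiz_id>/questions"
headers = {"Authorization": "Bearer <access_token>"}

# The whole question object must be wrapped in a "question" property.
payload = {
    "question": {
        "question_type": "matching_question",
        "question_text": "Match the capital with the state.",
        "answers": [
            {"answer_match_left": "Salt Lake City", "answer_match_right": "Utah"}
        ],
        "points_possible": 1
    }
}

resp = requests.post(url, headers=headers, json=payload)
print(resp.status_code)
print(resp.json())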

The appendices that explain the format for each type of question are buried in the Quiz Submission Questions page. The documentation for Matching Questions (and perhaps others) isn't very helpful. It makes it sound like you would need to know the match IDs, but there's no way to know those when you are creating a question, only when getting or updating one.

The reason it didn't add the distractors or text is where you put them: you attached them to a specific response (inside the answers array) rather than to the question. When I look at what Canvas itself sends (the payload of its POST), matching_answer_incorrect_matches and text_after_answers go with the question, not the answers. I do not see where text_after_answers is used for the matching question type, though.

The overall question comments go in correct_comments_html, incorrect_comments_html, or neutral_comments_html, which fall under the question property (not the answer).
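
For example, question-level feedback sits alongside the other question properties, roughly like this (a sketch, not a complete payload):

{
  "question": {
    "question_type": "matching_question",
    "correct_comments_html": "<p>Correct!</p>",
    "incorrect_comments_html": "<p>Not quite; review the list of capitals.</p>"
  }
}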

For item-specific responses, you need to use answer_comment or answer_comment_html within the answer property. Note that you misspelled it as answer_comments (plural). That said, it did not work for me with answer_comment, only with answer_comment_html.

answer_weight is not needed since you can only put in correct responses. If you don't include points_possible, then it sets the points to 0.

Here is an MWE (minimal working example) based on what you had. I changed the comments a little.

{
  "question": {
    "question_type": "matching_question",
    "question_text": "Match the correct name to the discovery or theory.",
    "question_name": "Matching Question",
    "matching_answer_incorrect_matches": "Nevada\nCalifornia\nWashington",
    "answers": [
      {
        "answer_match_left": "Salt Lake City",
        "answer_match_right": "Utah",
        "answer_comment_html": "<p>Salt Lake City is the capital of Utah.</p>"
      }
    ],
    "points_possible": 1
  }
}


I'm going to expand the MWE in case other people stumble across this and ask what good a matching question with just one blank is, since that's essentially a multiple-choice question, except that multiple choice lets you attach feedback to incorrect responses.

Here's what it would look like with another capital.

{
  "question": {
    "question_type": "matching_question",
    "question_text": "Match the capital with the state.",
    "question_name": "State Capitals",
    "matching_answer_incorrect_matches": "Nevada\nCalifornia\nWashington",
    "answers": [
      {
        "answer_match_left": "Salt Lake City",
        "answer_match_right": "Utah",
        "answer_comment_html": "<p>Salt Lake City is the capital of Utah.</p>"
      },
      {
        "answer_match_left": "Springfield",
        "answer_match_right": "Illinois",
        "answer_comment_html": "<p>Springfield is the capital of Illinois.</p>"
      }
    ],
    "points_possible": 1
  }
}
