
In the 2019-04-20 release notes, one of the bug fixes was: “The Copy a Canvas Course option uses active term dates to display available courses in the drop-down list.” Recently, it came to our attention in Emerson College’s Instructional Technology Group that this bug fix had the side effect of removing past courses from this drop-down list unless the “Include completed courses” box is checked.

 

The list of courses in the "Select a course" dropdown for copying a Canvas course now changes when "Include completed courses" is checked.

Since we’d all gotten used to past courses appearing in the list whether or not this box was checked, the change caused us to assume that the drop-down was broken. Based on the comments by Christopher Casey, Rick Murch-Shafer, Chris Hofer, and Joni Miller in the release notes thread, we aren't the only ones who ignored this checkbox until now.

 

Almost all of the time, our faculty use the course copy tool to copy from a past semester to the current one. To prevent confusion due to the new functionality, we decided to force the “Include completed courses” box to be checked by default.

 

Demonstration that choosing "Copy a Canvas Course" now results in the "Include completed courses" checkbox being checked by default.

 

Here’s the code I used to make this happen. I’m happy to help others get this working in their custom js files too!

 

Edited to add: Check the comments for more efficient and concise code for this. I'm leaving the original version here for the thought process breakdown.

 

I started by writing a helper function to do the actual work of checking the box:

 

/*
 * Check the "Include completed courses" box on the course import screen.
 * NOTE: If the checkbox ID changes in future versions of Canvas, this
 * code will need to be adjusted as well.
 */


function checkCompletedCourses() {
  var completedBox = document.getElementById("include_completed_courses");

  if ((typeof completedBox !== 'undefined') && (completedBox !== null)) {
    // Set the checkbox value
    completedBox.checked = true;
    // Trigger the change event as if the box was being clicked by the user
    completedBox.click();
  }
}

 

Inside the document ready function in our custom js code file, I already had a variable for running code only on specific pages. I added an additional regular expression to check for the Import Content page in a course.

 

var currentCoursePath = window.location.pathname;
var importPattern = /(\/courses\/[0-9]+\/content_migrations)$/i;

 

Since the “Include completed courses” checkbox doesn’t exist until the “Copy a Canvas Course” option is selected, I set up a MutationObserver to monitor the div that this checkbox gets added to.

 

if (importPattern.test(currentCoursePath)) {
  var importBoxObserver = new MutationObserver(function(mutations) {
    mutations.forEach(function(mutation) {
      checkCompletedCourses();
    });
  });

  importBoxObserver.observe(document.getElementById("converter"), {
    childList: true
  });
}

 

So far this is working for us and we’re hoping it’ll prevent extra pre-semester stress once faculty are back on campus for the Fall.

I'm trying to make standards-based grading more approachable for my teachers. When I was teaching full time, I held to Frank Noschese's Keep It Simple philosophy: single standards correlate to single assignments that are scored as pass/fail. I then averaged these out on a weighted scale to calculate a 0-100 grade, but that's for another post.

 

Using Canvas, I was able to set up a functional reassessment strategy to aggregate demonstrations of proficiency.

The Learning Mastery Gradebook in Canvas does not translate anything into the traditional gradebook. This meant that every week or so, I would have to open the Mastery report alongside the traditional gradebook and update scores line by line. This was tedious and prone to error.

 

Using the Canvas API and a MySQL database, I put together a Python web app to do that work for me. The idea is that a single outcome in a Canvas course is linked with a single assignment to be scored as a 1 or 0 (pass/fail) when a mastery threshold is reached.
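All the database really needs to hold is that Outcome-to-Assignment link; no grades or student records live there. A minimal sketch of what such an alignment table could look like (the table and column names here are hypothetical, not the app's actual schema, and SQLite stands in for MySQL):

import sqlite3

# Hypothetical stand-in for the app's MySQL alignment table.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE alignment (
        id            INTEGER PRIMARY KEY,
        course_id     INTEGER NOT NULL,   -- Canvas course ID
        outcome_id    INTEGER NOT NULL,   -- Canvas Outcome ID
        assignment_id INTEGER NOT NULL,   -- Assignment toggled to 1 or 0
        threshold     REAL DEFAULT 3.0    -- mastery cutoff (out of 4)
    )
""")
conn.execute("INSERT INTO alignment (course_id, outcome_id, assignment_id) VALUES (?, ?, ?)",
             (1234, 5678, 9012))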

 

The App

Users log in via their existing Canvas account using the OAuth flow. They are then shown a list of active courses along with the number of students and how many Essential Standards are currently being assessed (i.e., linked to an assignment).
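For those curious about the login piece, the Canvas OAuth2 web flow is a standard two-step exchange. Roughly (the endpoints are from the Canvas OAuth2 documentation; the client credentials and redirect URI below are placeholders, not the app's real configuration):

from urllib.parse import urlencode
import requests

CANVAS = "https://canvas.example.edu"            # placeholder Canvas instance
CLIENT_ID = "<developer key id>"                 # placeholder developer key
CLIENT_SECRET = "<developer key secret>"
REDIRECT_URI = "https://app.example.edu/oauth/callback"

def authorize_url(state):
    # Step 1: send the user to Canvas to approve the app.
    params = {"client_id": CLIENT_ID, "response_type": "code",
              "redirect_uri": REDIRECT_URI, "state": state}
    return CANVAS + "/login/oauth2/auth?" + urlencode(params)

def exchange_code(code):
    # Step 2: trade the returned code for an access token tied to that user.
    resp = requests.post(CANVAS + "/login/oauth2/token", data={
        "grant_type": "authorization_code",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "redirect_uri": REDIRECT_URI,
        "code": code,
    })
    resp.raise_for_status()
    return resp.json()   # contains access_token and a basic Canvas user record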

 

Teacher Dashboard

The teacher dashboard

 

 

Single Course

In the Course view, users select which grading category will be used for the standards. Outcomes are pulled in from the course and stored via their ID number. Assignments from the selected group are imported and added to the dropdown menu for each Outcome.

 

Users align Outcomes to the Assignment they want to be updated in Canvas when the scores are reconciled. This pulls live from Canvas, so the Outcomes and Assignments must exist prior to importing. As Assignments are aligned, they're added to the score report table.
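Behind the scenes, pulling those two lists only takes a couple of Canvas REST calls. A rough sketch of what the import could look like (plain requests with a placeholder instance and token; the app's own code may differ):

import requests

API = "https://canvas.example.edu/api/v1"      # placeholder instance
HEADERS = {"Authorization": "Bearer <token>"}  # the user's API token

def course_outcomes(course_id):
    # Outcome links in the course; each embeds the Outcome's ID and title.
    r = requests.get(API + "/courses/{}/outcome_group_links".format(course_id),
                     headers=HEADERS, params={"per_page": 100})
    r.raise_for_status()
    return [link["outcome"] for link in r.json()]

def assignments_in_group(course_id, group_name):
    # Assignments in the grading category selected for the standards.
    r = requests.get(API + "/courses/{}/assignment_groups".format(course_id),
                     headers=HEADERS, params={"include[]": "assignments"})
    r.raise_for_status()
    for group in r.json():
        if group["name"] == group_name:
            return group["assignments"]
    return []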

 

Score Reports

Right now, it defaults to pass/fail scoring: the aligned Assignment receives a 1 if the Outcome score is greater than or equal to 3 (out of 4), and a 0 otherwise. All of the grade data is pulled at runtime - no student information is ever stored in the database. The Outcome/Assignment relationship that was created tells the app which assignment to update for which Outcome.

When scores are updated, the entire table is processed. The app pulls data via the API and compares the Outcome score with the Assignment grade. If an Outcome has risen to a 3 or above, the associated Assignment is toggled to a 1. The same is true for the inverse: if an Outcome falls below a 3, the Assignment is toggled back to a 0.
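In Canvas API terms, one pass over a single alignment could look something like the sketch below: read the outcome rollups, compare each student's score to the threshold, and post a 1 or 0 to the aligned assignment. This is my reconstruction from the description above, not the app's actual code; the endpoint paths come from the Outcome Results and Submissions APIs.

import requests

API = "https://canvas.example.edu/api/v1"      # placeholder instance
HEADERS = {"Authorization": "Bearer <token>"}

def reconcile(course_id, outcome_id, assignment_id, threshold=3.0):
    # Pull rollup scores for this one Outcome, for every student in the course.
    r = requests.get(API + "/courses/{}/outcome_rollups".format(course_id),
                     headers=HEADERS,
                     params={"outcome_ids[]": outcome_id, "per_page": 100})
    r.raise_for_status()
    for rollup in r.json()["rollups"]:
        user_id = rollup["links"]["user"]
        scores = [s["score"] for s in rollup["scores"] if s.get("score") is not None]
        if not scores:
            continue                      # no result on this Outcome yet
        grade = "1" if scores[0] >= threshold else "0"
        # Toggle the aligned assignment's grade for this student.
        requests.put(API + "/courses/{}/assignments/{}/submissions/{}".format(
                         course_id, assignment_id, user_id),
                     headers=HEADERS,
                     data={"submission[posted_grade]": grade})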

 

I have mixed feelings about dropping a score, but the purpose of this little experiment is to make grade calculations and reconciliation between Outcomes and Assignments much smoother for the teacher. It requires a user to run it (there are no automatic updates), so grades can always be updated manually by the teacher in Canvas. Associations can also be removed at any time.

 

Improvements

To speed up processing, I use a Pool to run multiple checks at a time. It can process a class of ~30 students in under 10 seconds. I need to add some caching to make that even faster. This does not split students into sections, either. 
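The fan-out itself is just a worker pool mapped over the student list, something like the sketch below (a thread pool is used here since the work is API-bound; the real app's Pool may be configured differently):

from multiprocessing.pool import ThreadPool

def check_student(student_id):
    # Placeholder for the per-student work: pull the Outcome score,
    # compare it with the aligned Assignment grade, and update if needed.
    pass

def process_class(student_ids, workers=8):
    with ThreadPool(workers) as pool:
        pool.map(check_student, student_ids)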

 

I've started turning this into an LTI capable app which would make it even easier for teachers to jump in. If you're a Python developer, I would really appreciate some code review. There is definitely some cleanup to be done in the functions and documentation and any insight on the logic would be great.

 

The source for the project is on GitHub.

During 2019 I have been trying to use Canvas to help support the degree project process (for students, faculty, and administrators). One of the latest parts of this effort has been to look at some of the administrative decisions and actions that occur at the start of the process. A document about this can be found at https://github.com/gqmaguirejr/E-learning/blob/master/First-step-in-pictures-20190524.docx (a PDF is attached). The code can be found in SinatraTest21.rb at https://github.com/gqmaguirejr/E-learning. This code makes use of user custom data in conjunction with a dynamic survey (realized via an external LTI tool); the administrative decision and action part of the process uses custom columns in the gradebook, automatically creates sections, and adds a given student to the relevant section.

 

The Ruby code in the LTI tool uses a token to access the Canvas API and put values into the custom columns in the gradebook - this is probably not the best approach, but it worked for the purpose of this prototype.
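For readers who do not want to dig through the Ruby, the Canvas REST calls involved look roughly like this (sketched here in Python with placeholder instance, token, and IDs; the actual prototype is the Ruby code linked above):

import requests

API = "https://canvas.example.edu/api/v1"      # placeholder instance
HEADERS = {"Authorization": "Bearer <token>"}  # the token held by the LTI tool

def set_custom_column(course_id, column_id, user_id, text):
    # Write a value into a custom gradebook column for one student.
    r = requests.put(
        API + "/courses/{}/custom_gradebook_columns/{}/data/{}".format(
            course_id, column_id, user_id),
        headers=HEADERS, data={"column_data[content]": text})
    r.raise_for_status()

def add_student_to_new_section(course_id, section_name, user_id):
    # Create a section, then enroll the given student in it.
    r = requests.post(API + "/courses/{}/sections".format(course_id),
                      headers=HEADERS, data={"course_section[name]": section_name})
    r.raise_for_status()
    section_id = r.json()["id"]
    requests.post(API + "/sections/{}/enrollments".format(section_id),
                  headers=HEADERS,
                  data={"enrollment[user_id]": user_id,
                        "enrollment[type]": "StudentEnrollment"})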

James Jones suggested that I write a blog post about my recent findings and results.

Background

I just recently started with Canvas because Uppsala University has decided to use it as its upcoming LMS platform after a failed attempt with another product. I had therefore already spent some time with Blackboard and was quite fond of its calculated question type in quizzes. I quickly found out that Canvas offers essentially the same functionality, but in a somewhat less convenient form.

 

Problem

A calculated question, or Formula Question as it is called in the Canvas interface, is based on a table of pre-generated variable values and corresponding results. In the general case, the variables are defined and the target function is entered using the web interface; Canvas then calculates random values for the variables and the resulting answer values. However, as the designer you have no way to influence the variable values afterwards (unlike in Blackboard, where you have a spreadsheet-like interface). Also, in Canvas, the equation cannot be altered once it has been entered - and the supported syntax is not very convenient for more complex problems.
I was also missing the ability to give a relative tolerance for the correct answers in a question; however, I found out that entering the tolerance with a percent sign (e.g. 5%) gives exactly this behavior, even though it does not seem to be documented anywhere.

 

Solution or problems?

My hope then turned to the API, since it seemed to support the creation of questions. But even though there is a Python library (canvasapi) for controlling Canvas, many of its functions are not very well documented. My first tries failed miserably, but I finally got on the right track.

 

The cause of my problems was that the Canvas API uses different field identifiers and structures when creating a calculated question than when retrieving the contents of an already existing question - which is of course what I did in my attempts to reverse-engineer the interface.

 

Working solution

Here is now an example of a working solution that gives you full control over the generation of Formula Questions using Python and the canvasapi library. The example is in Python 3 and creates a question from the field of electronics - the voltage in a voltage divider. The script defines the variables and fills them with random numbers from a set of predefined, commonly used values. I tried to write the script more for readability than for any Pythonic optimization.

from canvasapi import Canvas
import itertools
import random

API_URL = "https://canvas.instructure.com"
API_KEY = "<your api key here>"

canvas = Canvas(API_URL, API_KEY)

# create a calculated_question
# example of a potential divider
#
#  U2 = U0 * R2 / ( R1 + R2 )
#

E3  = [1, 2, 5]
E6  = [1.0, 1.5, 2.2, 3.3, 4.7, 6.8]
E12 = [1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8, 8.2]

coursename = 'test'
quizname   = 'test'

# define the input variable names
#   each variable has its own range, format and scale
#  
variables = \
    [
      {
        'name':   'U0',
        'unit':   'V',
        'format': '{:.1f}',
        'scale':  '1',
        'range':  [1.2, 1.5, 4.5, 9, 12, 24, 48, 110, 220]
      },
      {
        'name':   'R1',
        'unit':   'ohm',
        'format': '{:.1f}',
        'scale':  '1',
        'range':  [ i*j for i, j in itertools.product([10, 100, 1000], E12)]
      },
      {
        'name':   'R2',
        'unit':   'ohm',
        'format': '{:.1f}',
        'scale':  '1',
        'range':  [ i*j for i, j in itertools.product([10, 100, 1000], E12)]
      },
    ]

# how many sets of answers
rows = 30

# create an empty list of lists (array) for the values
values = [ [ i for i in range(len(variables))] for _ in range(rows)]

# create an empty list for the calculated results
results = [i for i in range(rows)]

# fill the array of input values with random choices from the given ranges
for i in range(rows):
    for j in range(len(variables)):
        values[i][j] = random.choice(variables[j].get('range'))

    # and calculate the result value   
    results[i] = values[i][0] * values[i][2] / (values[i][1]+values[i][2])

# format the text field for the question
#   an HTML table is created which presents the variables and their values
question_text = '<p><table border="1"><tr><th></th><th>value</th><th>unit</th></tr>';
for j in range(len(variables)):
    question_text += '<tr>'
    question_text += '<td style="text-align:center;">' + variables[j].get('name') + '</td>'
    question_text += '<td style="text-align:right;">[' + variables[j].get('name') + ']</td>'
    question_text += '<td style="text-align:center;">' + variables[j].get('unit') + '</td>'
    question_text += '</tr>'
question_text += '</table></p>'

# format the central block of values and results
answers = []
for i in range(rows):
    answers.append(\
        {
          'weight': '100',
          'variables':
          [
            {
              'name': variables[j].get('name'),
              'value': variables[j].get('format').format(values[i][j])
            } for j in range(len(variables))
          ],
          'answer_text': '{:.5g}'.format(results[i])
        })

# format the block of variables,
#   'min' and 'max' do not matter since the values are created inside the script
#   'scale' determines the decimal places during output 
variables_block = []
for j in range(len(variables)):
    variables_block.append(\
        {
          'name':  variables[j].get('name'),
          'min':   '1.0',
          'max':   '10.0',
          'scale': variables[j].get('scale')
        })

# put together the structure of the question
new_question = \
    {
      'question_name':           'Question 6',
      'question_type':           'calculated_question',
      'question_text':           question_text,
      'points_possible':         '1.0',
      'correct_comments':        '',
      'incorrect_comments':      '',
      'neutral_comments':        '',
      'correct_comments_html':   '',
      'incorrect_comments_html': '',
      'neutral_comments_html':   '',
      'answers':                 answers,
      'variables':               variables_block,
      'formulas':                ['automated by python'],
      'answer_tolerance':        '5%',
      'formula_decimal_places':  '1',
      'matches':                 None,
      'matching_answer_incorrect_matches': None,
    }
                                 

courses  = canvas.get_courses()
for course in courses:
    if course.name.lower() == coursename.lower():
        print('found course')
        quizzes = course.get_quizzes()
        for quiz in quizzes:
            if quiz.title.lower() == quizname.lower():
                print('found quiz')

                question = quiz.create_question(question = new_question)      
       

Since this is mostly the result of successful reverse engineering and not based on the actual source code of Canvas, the above example should perhaps be used with care, but for me it is what I needed to create usable questions for my students. Perhaps this could also serve the developers as an example of how the interface for calculated questions could be improved in the future.

 

How does it work?

The list variables (lines 26-49) contains the names and ranges of the variables, as well as formatting instructions; the ranges are given as lists. Lines 61-66 generate the random input values and calculate the results from them. Lines 70-77 create a rudimentary HTML table, included in the question text, which presents the variables, their values, and the physical units for this particular question. Lines 80-93 then assemble the variable/answer block, and lines 109-128 finally put everything together into the dictionary used to create the new question.

The script then inserts the question into an existing quiz in an existing course in line 140.

 

After running the script

This screenshot shows the inserted question after running the script; obviously it would still need some more cosmetics.

inserted question inside the quiz after executing the script

And when editing the question this is what you see:

editing the question

Be careful not to touch the variables or the formula section since this will reset the table values.

 

Cosmetics

In order to be presentable to the students, the above question needs some cosmetics. What is to be calculated? Perhaps insert a picture or an equation? More text?

after editing, but still inside the editor

After updating the question and leaving the editor it now looks like this in the Canvas UI:

the modified question inside the quiz

 

Seeing and answering the question

When you now start the quiz, this is how the question looks:

the question as it is seen by the student

Summary

  • calculated_questions can be generated using the Python canvasapi library
  • answer values have to be provided with the key 'answer_text'
    'answers': [
       {
         'weight': '100',
         'variables': [
         {'name': 'U0', 'value': '9.0'},
         {'name': 'R1', 'value': '5600.0'},
         {'name': 'R2', 'value': '5600.0'}],
         'answer_text': '4.5'},

     

  • when querying an existing calculated_question through the API the answer values are found with the key 'answer'
    answers=[
        {'weight': 100,
         'variables': [
          {'name': 'U0', 'value': '110.0'},
          {'name': 'R1', 'value': '82.0'},
          {'name': 'R2', 'value': '8200.0'}],
         'answer': 108.91,
         'id': 3863},

     

  • when supplying an equation for the 'formulas' field, it has to be given as a list, not a dictionary
     'formulas':  ['a*b'],

     

  • when querying an existing calculated_question through the API the equations are found in a dictionary like this:
     formulas=[{'formula': 'a*b'}],