Embulk is an open-source bulk data loader that helps transfer data between various databases, storage systems, file formats, and cloud services (embulk.org).

 

Simply put, Embulk makes it easy, really easy, to import gzipped CSV files into any RDBMS* and to manage the data and workflow necessary for Canvas Data with command-line tools, specifically solving issues we experienced working with Canvas Data without fancier tools.

 

with support for

Linux, OSX, Windows https://github.com/embulk/embulk#quick-start

MySQL, MS SQL Server, Oracle, PostgreSQL, RedShift https://github.com/embulk/embulk-output-jdbc

 

* Embulk goes beyond SQL; see the List of Embulk Plugins by Category

 

and features useful for Canvas Data

  • Decode gzipped files
  • The ability to intelligently guess the format and data types of CSV files
  • Parallel execution of tasks, multi-threading per CPU core, and a task for each batch file
  • Input CSV Plugin as default Input for Embulk
  • Filter data with Filter Plugins, https://plugins.embulk.org/#filter
  • Output Data to SQL
    • Insert, Insert Direct, Replace, Merge, Truncate and Truncate Insert
    • Timestamp formatting
    • TimeZone conversion from UTC for date time columns
    • before_load and after_load config options to run queries before the import (e.g., truncate) and after it (e.g., add indexes)
    • and more

 

Embulk uses a YAML config file for each task; for Canvas Data this means each input source (table files) and its output destination (db table) is one file. This includes differences between staging, test, and production destinations. I imagine your workflow and setup will be different from mine and many others'. You may only need a few tables, or only have one database, or you could simply use Embulk to manage, manipulate, filter, and possibly join CSV files to examine with Tableau if that's your thing. For this reason, I have shared only the sets of config files for MySQL, MSSQL, Oracle, and PostgreSQL. I have not worked with RedShift.

 

Our old workflow required that we attempt to maintain the newest data from Canvas Data for reporting, attendance, API services and automation, and LTIs. One of our biggest issues was the size of the daily batch without deltas: with the growing use of Canvas within our schools, importing everything takes a long time, and it is slow and unnecessary to hold six years' worth of data for the current semester. We tried different things in SQL and bash to quickly limit production data to the current school year, but never implemented them. LTI queries for attendance and submissions were really slow. Then some days the downloaded files were 0 bytes (we must have lost internet), or there were duplicates and a table didn't load, and it took until 2pm to get everything loaded. Sometimes there were new columns in a table and I had forgotten to read the release notes, and we'd truncated the table before importing, and it took hours to import. And so on.

 

Some of these are human, some of these are manageable.

 

Our new workflow uses Embulk

  1. Download with Canvas Data CLI, some of that documented here
  2. Import all CD tables using CSV in, SQL out to the staging environment with Replace mode. This creates temporary tables for the import; if it fails, the previous version is still intact. After a successful import, Embulk will drop the old table and run the after_load queries, which I use for enumerable constraints and indexes. I left a lot of examples in the configs.

    The Requests table config uses Insert mode to append the new rows.
  3. I use staging for Tableau reporting. For production, I only need to load the tables necessary for our LTIs and API services. Some of these configs are straight copies of the staging imports, except they point to production. Some of the configs create new tables using SQL in, SQL out, importing filtered or composite tables from query results with https://github.com/embulk/embulk-input-jdbc

    here's an example: https://github.com/ccsd/canvas-data-embulk-configs/wiki/SQL-in-SQL-out

 

Using RHEL7, 6 CPUs with 12 cores, and 16GB RAM, Embulk imports 7.9GB of CSVs into >1TB of SQL (no requests) in less than 4.5 hours, depending on which indexes you keep in the configs.

 

GitHub - ccsd/canvas-data-embulk-configs: YAML configs for importing Canvas Data with Embulk

 

Poh Duong

Ruby GEM for REST calls

Posted by Poh Duong Jul 3, 2019

The University of Adelaide has built a REST Client GEM as part of one of our integration projects.

 

The GEM has now been open sourced and is available via https://rubygems.org. The source code can be found at https://github.com/universityofadelaide/rest-client-wrapper.

 

The GEM's main features are:

  • Retries for API calls
  • Ability to set authentication types and automatically populate the request with auth parameters
  • Re-auth for OAuth
  • Getting all data from paginated endpoints (only if the API implements pagination with header links)

 

The GEM is built as a wrapper around the rest-client GEM (https://github.com/rest-client/rest-client).

Introduction

Canvas Data provides a wealth of information that can be used in many interesting ways, but there are a few hurdles that can make it hard to even get started:

  • The Canvas Data API uses a different authentication mechanism than the one that you're probably already used to using with the Canvas API.
  • Data is provided in compressed tab-delimited files. To be useful, they typically need to be loaded into some kind of database.
  • For larger tables, data is split across multiple files, each of which must be downloaded and loaded into your database.
  • The size of the data can be unwieldy for use with locally-running tools such as Excel.
  • Maintaining a local database and keeping up with schema changes can be tedious.

 

This tutorial will show you how to build a data warehouse for your Canvas Data using Amazon Web Services. Besides solving all of the problems above, this cloud-based solution has several advantages compared to maintaining a local database:

  • You can easily keep your warehouse up to date by scheduling the synchronization process to run daily.
  • You can store large amounts of data very cheaply.
  • There are no machines to maintain, operating systems to patch, or software to upgrade.
  • You can easily share your Canvas Data Warehouse with colleagues and collaborators.

 

Before we get started

Before you begin this tutorial, you should make sure that you've got an AWS account available, and that you have administrator access. You'll also need an API key and secret for the Canvas Data API.

 

Experience with relational databases and writing SQL will be necessary in order to query your data. Experience with AWS and the AWS console will be helpful.

 

Be aware that you'll be creating AWS resources in your account that are not free, but the total cost to run this data warehouse should be under about $10/month (based on my current snapshot size of 380GB). There's also a cost associated with the queries that you run against this data, but typical queries will only cost pennies.

 

All of the code used in this tutorial can be found in GitHub:
GitHub - Harvard-University-iCommons/canvas-data-aws: Build a Canvas Data warehouse on AWS 

 

AWS services we'll use

We'll use several different AWS services to build the warehouse:

  • S3: we'll store all of the raw data files in an S3 bucket. Since S3 buckets are unlimited in size and extremely durable, we won't need to worry about running out of space or having a hard drive fail.
  • Lambda: we'll use serverless Lambda functions to synchronize files to the S3 bucket. Since we can launch hundreds or even thousands of Lambda functions in parallel, downloading all of our files is very fast.
  • SNS: we'll use the Simple Notification Service to let us know when the synchronization process runs.
  • Glue: we'll create a data catalog that describes the contents of our raw files. This creates a "virtual database" of tables and columns.
  • Athena: we'll use this analytics tool along with the Glue data catalog to query the data files directly without having to load them into a database first.
  • CloudFormation: we'll use AWS' infrastructure automation service to set up all of the pieces above in a few easy steps!

 

Let's build a warehouse!

  1. Log into the AWS console and access the CloudFormation service.
  2. Click on the Create Stack button
  3. On the next screen, leave the Template is ready and Amazon S3 URL options selected. Below, enter this S3 URL:
    https://huit-at-public-build-artifacts.s3.amazonaws.com/canvas-data-aws/canvas_data_aws.yaml
    Click Next. 
  4. On the stack details screen, first enter a name for this stack. Something like "canvas-data-warehouse" is fine. Enter your Canvas Data API key and secret in the fields provided. Enter your email address (so that you can receive updates when the synchronization process runs). You can leave the default values for the other parameters. Click Next.
  5. On the stack options screen, leave all of the default values and click Next.
  6. On the review screen, scroll to the bottom and check the box to acknowledge that the template will create IAM resources (roles, in this case). Click the Create stack button, and watch as the process begins!

 

It'll take several minutes for all of the resources defined in the CloudFormation template to be created. You can follow the progress on the Events tab. Once the stack is complete, check your email -- you should have received a message from SNS asking you to confirm your subscription. Click on the link in the email and you'll be all set to receive updates from the data-synchronization process.

 

Now we're ready to load some data!

 

Loading data into the warehouse

Instructure's documentation for the Canvas Data API describes an algorithm for maintaining a snapshot of your current data:

  1. Make a request to the "sync" API endpoint, and for every file returned:
    • If the filename has been downloaded previously, do not download it
    • If the filename has not yet been downloaded, download it
  2. After all files have been processed:
    • Delete any local file that isn't in the list of files from the API
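
In code, the algorithm amounts to a set difference in each direction. Here is a minimal Python sketch, assuming a hypothetical download_file(url, path) helper and a flat local directory; the actual stack implements this with Lambda functions writing to S3:

import os

def sync_snapshot(remote_files, local_dir):
    # remote_files: dict mapping filename -> download URL, as returned
    # by the Canvas Data "sync" endpoint
    local = set(os.listdir(local_dir))
    remote = set(remote_files)

    # Download any file we haven't seen before.
    for name in remote - local:
        download_file(remote_files[name], os.path.join(local_dir, name))  # hypothetical helper

    # Delete local files that are no longer part of the snapshot.
    for name in local - remote:
        os.remove(os.path.join(local_dir, name))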

 

The CloudFormation stack that you just created includes an implementation of this algorithm using Lambda functions. A scheduled job will run the synchronization process every day at 10am UTC, but right now we don't want to wait -- let's manually kick off the synchronization process and watch the initial set of data get loaded into our warehouse.

 

To do that, we just need to manually invoke the sync-canvas-data-files function. Back in the AWS console, access the Lambda service. You'll see the two functions that are used by our warehouse listed -- click on the sync-canvas-data-files function.

 

On this screen you can see the details about the Lambda function. We can use the AWS Lambda Console's test feature to invoke the function. Click on the Configure test events button, enter a name for your test event (like "manual"), and click Create. Now click on the Test button, and your Lambda function will be executed. The console will show an indication that the function is running, and when it's complete you'll see the results. You'll also receive the results in your email box.
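
If you'd rather skip the console clicking, the same manual invocation can be done programmatically. A short boto3 sketch, assuming your AWS credentials are configured locally (the deployed function name may carry a stack prefix in your account):

import boto3

# Invoke the sync function manually, equivalent to the console's Test button.
client = boto3.client('lambda')
response = client.invoke(FunctionName='sync-canvas-data-files')
print(response['StatusCode'])  # 200 means the invocation succeeded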

 

Querying your data

When the Lambda function above ran, in addition to downloading all of the raw data files, it created tables in our Glue data catalog making them queryable in AWS Athena. In the AWS console, navigate to the Athena service.  You should see something similar to the screenshot below:

 

 

You can now write SQL to query your data just as if it had been loaded into a relational database. You'll need to understand the schema, and Instructure provides documentation explaining what each table contains: https://portal.inshosteddata.com/docs

 

Some example queries:

  • Get the number of courses in each workflow state:
    SELECT workflow_state, count(*) FROM course_dim GROUP BY workflow_state;
  • Get the average number of published assignments per course in your active courses:
    SELECT AVG(assignments) FROM (SELECT COUNT(*) AS assignments 
    FROM course_dim c, assignment_dim a
    WHERE c.id = a.course_id
    AND c.workflow_state = 'available'
    AND a.workflow_state = 'published'
    GROUP BY c.id);
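
You can also submit queries to Athena programmatically rather than through the console. A minimal boto3 sketch; the database name and results bucket below are placeholders to replace with your own:

import boto3

athena = boto3.client('athena')

# Athena runs the query asynchronously and writes the results to S3.
execution = athena.start_query_execution(
    QueryString='SELECT workflow_state, count(*) FROM course_dim GROUP BY workflow_state',
    QueryExecutionContext={'Database': 'canvas_data'},                 # placeholder database
    ResultConfiguration={'OutputLocation': 's3://my-athena-results/'}  # placeholder bucket
)
print(execution['QueryExecutionId'])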

 

Cleaning up

If you don't want to keep your data warehouse, cleaning up is easy: just delete the "raw_files" folder from your S3 bucket, and then delete the stack in the CloudFormation console. All of the resources that were created will be removed, and you'll incur no further costs. 

 

Good luck, and please let me know if you run into any trouble with any of the steps above!

New FERPA requirements for cross-listed courses! and other posts have commented on the problems of cross-listing. However, the ability to cross-list is controlled by the same permission as being able to create/edit/delete sections. I find this odd. I think that the ability to cross-list should not be tied to the ability to utilize sections within a course.

 

The file app/controllers/sections_controller.rb has the following code (the relevant line is the authorized_action check):

  # @API Cross-list a Section
  # Move the Section to another course.  The new course may be in a different account (department),
  # but must belong to the same root account (institution).
  #
  # @returns Section
  def crosslist
    @new_course = api_find(@section.root_account.all_courses.not_deleted, params[:new_course_id])
    if authorized_action(@section, @current_user, :update) && authorized_action(@new_course, @current_user, :manage)   
      @section.crosslist_to_course(@new_course, updating_user: @current_user)
      respond_to do |format|
        flash[:notice] = t('section_crosslisted', "Section successfully cross-listed!")
        format.html { redirect_to named_context_url(@new_course, :context_section_url, @section.id) }
        format.json { render :json => (api_request? ? section_json(@section, @current_user, session, []) : @section) }
      end
    end
  end

The check for authorized actions means that:

  • Unless the current user has the ability to update sections, the cross listing will not occur.
  • Unless the current user can manage the target course otherwise, the cross-listing will not occur.

Unfortunately, cross-listing provides a sneak path to add students to a course: if a teacher without administrative rights, but with the ability to create sections, does a cross-listing, the students will be added to the target course, despite the fact that the teacher does not have rights to add students to that course.

 

Moreover, if the current user can manually add students to the target course, then they can always add each of the students from a section to the target course. This means that there needs to be a check in the above code on whether the current user can enroll students in the target course. 

 

 

Therefore, the second test should be something similar to:

authorized_action(@new_course, @current_user, :manage)  && authorized_action(@new_course, @current_user, :manage_students) 

 

Where (based on looking at ./app/models/role_override.rb and spec/apis/v1/enrollments_api_spec.rb) :manage_students would enable the creation of student enrollments. Thus unless the user is allowed to enroll students in the target course, the cross-listing would not occur.

 

If the permission to add students (permission: Users - add / remove students in courses) in imported courses (i.e., courses that are automatically created by SIS imports) is disabled for the "Teacher" role, there should not be a problem in allowing teachers to create/edit/delete sections while still meeting FERPA and other similar regulations, as there would not be any ability to cross-list a section's worth of students, and each section would only be within a single course. In fact, sections could be used to reduce the visibility and interactions of students (and even teachers) to within their section, thus advancing students' privacy.

David Lyons

Module Filters

Posted by David Lyons Employee Jun 11, 2019

Disclaimer: This is a personal project, and is not endorsed by Instructure or Canvas LMS. Custom JavaScript must be maintained by the institution.

Most (great) product features come from a deep understanding of customers’ problems. It’s tempting to build every “good” or “obvious” feature someone can describe passionately but that leads to thoughtless bloat that breaks the UX. And most things people describe as “obvious” actually have 10,000 questions between the comment and a well researched/tested feature.

Sometimes the stars align and a conversation with an insightful person includes an offhanded “wouldn’t it be neat” comment that’s small enough to quickly prototype and test. And those are just the circumstances that led to this experiment: Module Filters.

Behold! Content filters!

The comment, which was part of a much larger conversation on organization and navigation, was

“Wouldn’t it be neat if you could filter by content type right on the Modules page in Canvas?”

and I agreed. Because Canvas supports custom JavaScript I was able to quickly mockup a functioning prototype for all-important testing and validation.

This project was a good candidate for me to experiment with because it’s

  1. small in scope
  2. technically possible
  3. UI/UX not immediately obvious

Small in Scope

Small changes a person or team can wrap their hands all the way around are ideal for quality and for ensuring the change actually addresses the problem. Feature creep is very real though, and I had to repeatedly slap my own hand and say “No! That's not part of what is being tested here!” Keeping things in scope is tough in the face of the endless waterfall of “wouldn't it be neat if it also…”

Technically Possible

What I mean by technically possible is that (1) the idea is literally possible at all and (2) it is within my ability to develop. JavaScript is great for uses exactly like this, and Canvas allows for this kind of thing. While the scope of the idea is small, if I knew nothing about HTML/CSS/JavaScript and had to learn all of that first, the overall project would have been a somewhat larger commitment.

UI/UX

This is where the bulk of the work (and my excitement for the idea) went. “Filters” in apps don't have a universal UI: sometimes they're checkboxes, or a dropdown menu, or toggles, or they happen automatically while typing, etc. None of those is right or wrong; which direction one might lean depends on the situation. My first version actually used unstyled checkboxes with labels (which looked awful) just to make sure my code worked. Thinking about the UI/UX also helped me with feature creep: a filter like content type works well as checkboxes because a user might want any number of filter combinations on/off, but checkboxes wouldn't work well to toggle a single binary state like “has due date”, for example. One might even want different types of filter simultaneously, which requires a lot of additional considerations.

Ultimately I settled on an on/off toggle using the corresponding content icon instead of a checkbox with a label to support any combination of content types to be shown/hidden, and to avoid adding text to the app UI. Keeping the filters to just content type made the UI more approachable and let me focus on the UX of how it might be to actually use this feature.

Try It and Tell Me What You Think

I put the code on Github with an MIT license. If you play with it I’d love to hear your thoughts either on the repo or on twitter.

In the 2019-04-20 release notes, one of the bug fixes was: “The Copy a Canvas Course option uses active term dates to display available courses in the drop-down list.” Recently, it came to our attention in Emerson College's Instructional Technology Group that this bug fix had the side effect of removing past courses from this drop-down list unless the “Include completed courses” box is checked.

 

The list of courses in the "Select a course" dropdown for copying a Canvas course now changes when "Include completed courses" is checked.

Since we’d all gotten used to past courses appearing in the list whether or not this box was checked, the change caused us to assume that the drop-down was broken. Based on the comments by Christopher Casey, Rick Murch-Shafer, Chris Hofer, and Joni Miller in the release notes thread, we aren't the only ones who ignored this checkbox until now.

 

Almost all of the time, when our faculty use the course copy tool, it's to copy from a past semester to the current one. To prevent confusion due to the new functionality, we decided to force the “Include completed courses” box to be checked by default.

 

Demonstration that choosing "Copy a Canvas Course" now results in the "Include completed courses" checkbox being checked by default.

 

Here’s the code I used to make this happen. I’m happy to help others get this working in their custom js files too!

 

Edited to add: Check the comments for more efficient and concise code for this. I'm leaving the original version here for the thought process breakdown.

 

I started by writing a helper function to do the actual work of checking the box:

 

/*
 * Check the "Include completed courses" box on the course import screen.
 * NOTE: If the checkbox ID changes in future versions of Canvas, this
 * code will need to be adjusted as well.
 */
function checkCompletedCourses() {
  var completedBox = document.getElementById("include_completed_courses");

  if (completedBox !== null && !completedBox.checked) {
    // Click the box rather than setting .checked directly: clicking an
    // unchecked checkbox both checks it and fires the change event as if
    // the user had clicked it.
    completedBox.click();
  }
}

 

Inside the document ready function in our custom js code file, I already had a variable for running code only on specific pages. I added an additional regular expression to check for the Import Content page in a course.

 

var currentCoursePath = window.location.pathname;
var importPattern = /(\/courses\/[0-9]+\/content_migrations)$/i;

 

Since the “Include completed courses” checkbox doesn’t exist until the “Copy a Canvas Course” option is selected, I set up a MutationObserver to monitor the div that this checkbox gets added to.

 

if (importPattern.test(currentCoursePath)) {
  var importBoxObserver = new MutationObserver(function(mutations) {
    mutations.forEach(function(mutation) {
      checkCompletedCourses();
    });
  });

  importBoxObserver.observe(document.getElementById("converter"), {
    childList: true
  });
}

 

So far this is working for us and we’re hoping it’ll prevent extra pre-semester stress once faculty are back on campus for the Fall.

I'm trying to make standards-based grading more approachable for my teachers. When I was teaching full time, I held to Frank Noschese's Keep It Simple philosophy. Single standards correlate to single assignments that are scored as pass/fail. Now, I averaged these out on a weighted scale to calculate a 0-100 grade, but that's for another post.

 

Using Canvas, I was able to set up a functional reassessment strategy to aggregate demonstrations of proficiency.

The Learning Mastery Gradebook in Canvas does not translate anything into the traditional gradebook. This meant that every week or so, I would have to open the Mastery report alongside the traditional gradebook and update scores line by line. This was tedious and prone to error.

 

Using the Canvas API and a MySQL database, I put together a Python web app to do that work for me. The idea is that a single outcome in a Canvas course is linked with a single assignment to be scored as a 1 or 0 (pass/fail) when a mastery threshold is reached.

 

The App

Users are logged in via their existing Canvas account using the OAuth flow. There they are shown a list of active courses along with the number of students and how many Essential Standards are currently being assessed (i.e., linked to an assignment).

 

Teacher Dashboard

The teacher dashboard

 

 

Single Course

In the Course view, users select which grading category will be used for the standards. Outcomes are pulled in from the course and stored via their ID number. Assignments from the selected group are imported and added to the dropdown menu for each Outcome.

 

Users align Outcomes to the Assignment they want to be updated in Canvas when the scores are reconciled. This pulls live from Canvas, so the Outcomes and Assignments must exist prior to importing. As Assignments are aligned, they're added to the score report table.

 

Score Reports

Right now, it defaults to a 1 or 0 (pass/fail) if the Outcome score is greater than or equal to 3 (out of 4). All of the grade data is pulled at runtime - no student information is ever stored in the database. The Outcome/Assignment relationship that was created tells the app which assignment to update for which Outcome.

When scores are updated, the entire table is processed. The app pulls data via the API and compares the Outcome score with the Assignment grade. If an Outcome has risen above a 3, the associated Assignment is toggled to a 1. The same is true for the inverse: if an Outcome falls below a 3, the Assignment is toggled back to a 0.
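
In API terms, that reconciliation boils down to two documented endpoints: reading the course's outcome results and updating the aligned Assignment's submission grade. Here is a rough sketch with requests; the domain, token, and scoring details are placeholders, not the app's actual code:

import requests

API = 'https://canvas.example.edu/api/v1'      # placeholder domain
HEADERS = {'Authorization': 'Bearer <token>'}  # placeholder token

def reconcile(course_id, user_id, outcome_id, assignment_id, threshold=3):
    # Pull the student's results for this Outcome.
    r = requests.get(f'{API}/courses/{course_id}/outcome_results',
                     headers=HEADERS,
                     params={'user_ids[]': user_id, 'outcome_ids[]': outcome_id})
    results = r.json()['outcome_results']
    # Simplified: take the last result's score; aggregation is up to you.
    score = results[-1]['score'] if results else 0

    # Toggle the aligned Assignment to 1 (mastered) or back to 0.
    grade = 1 if score >= threshold else 0
    requests.put(f'{API}/courses/{course_id}/assignments/{assignment_id}/submissions/{user_id}',
                 headers=HEADERS,
                 data={'submission[posted_grade]': grade})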

 

I have mixed feelings about dropping a score, but the purpose of this little experiment is to make grade calculations and reconciliation between Outcomes and Assignments much smoother for the teacher. It requires a user to run it (there are no automatic updates), so grades can always be updated manually by the teacher in Canvas. Associations can also be removed at any time.

 

Improvements

To speed up processing, I use a Pool to run multiple checks at a time. It can process a class of ~30 students in under 10 seconds. I need to add some caching to make that even faster. This does not split students into sections, either. 
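
For the curious, the pattern is just a standard-library pool. A minimal sketch, where check_student and students are stand-ins for the app's real per-student check and roster:

from multiprocessing.pool import ThreadPool

# The per-student checks are I/O-bound API calls, so threads work well.
with ThreadPool(8) as pool:
    pool.map(check_student, students)  # check_student/students are stand-ins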

 

I've started turning this into an LTI capable app which would make it even easier for teachers to jump in. If you're a Python developer, I would really appreciate some code review. There is definitely some cleanup to be done in the functions and documentation and any insight on the logic would be great.

 

The source for the project is on GitHub.

During 2019 I have been trying to use Canvas to help support the degree project process (for students, faculty, and administrators). One of the latest parts of this effort has been to look at some of the administrative decisions and actions that occur at the start of the process. A document about this can be found at https://github.com/gqmaguirejr/E-learning/blob/master/First-step-in-pictures-20190524.docx (a PDF is attached). The code can be found in SinatraTest21.rb at https://github.com/gqmaguirejr/E-learning. This code makes use of user custom data in conjunction with a dynamic survey (realized via an external LTI tool); the administrative decision-and-action part of the process uses custom columns in the gradebook, automatically creates sections, and adds a given student to the relevant section.

 

The Ruby code in the LTI tool uses a token to access the Canvas API to put values into the custom columns in the gradebook; this is probably not the best approach, but it worked for the purpose of this prototype.

James Jones suggested that I write a blog post about my recent findings/results.

Background

I just recently started with Canvas because Uppsala University has decided to use it as its upcoming LMS platform after a failed attempt with another product. I had therefore already spent some time with Blackboard and was quite fond of the calculated questions type in quizzes. I quickly found out that Canvas offers essentially the same functionality, but a bit less comfortably.

 

Problem

A calculated question, or Formula Question as it is called in the interface of Canvas, is based on a table of pre-generated variable values and corresponding results. In the general case the variables are defined and the target function is entered using the web interface; Canvas then calculates random values for the variables and the resulting answer values. However, as the designer you have no way to influence the variable values afterwards (unlike in Blackboard, where you have a spreadsheet-like interface). Also, in Canvas, the equation cannot be altered once it has been entered, and the supported syntax is not very convenient for more complex problems.
I was also missing the ability to give a relative tolerance for the correct answers in a question; however, I found out that entering the tolerance with a percent sign gives exactly this behavior, even though this does not seem to be documented anywhere.

 

Solution or problems?

My hope was then for the API, since it seemed to support the creation of questions. But even though there is a Python library for controlling Canvas, many of its functions are not very well documented. My first tries failed miserably, but finally I was on the right track.

 

The cause of my problems was that the Canvas API uses different field identifiers and structures when creating a calculated question than when you retrieve the contents of an already existing question, as I of course did in my attempts to reverse-engineer the interface.

 

Working solution

Here is now an example of a working solution that gives you full control over the generation of Formula Questions using Python and the canvasapi library. The example is in Python 3 and creates a question from the field of electronics: the voltage in a voltage divider. The Python script generates the variables and fills them with random numbers from a set of predefined, commonly used values. I tried to write the script more for readability than for any pythonic optimization.

from canvasapi import Canvas
import itertools
import random

API_URL = "https://canvas.instructure.com"
API_KEY = '<your api key here>'  # replace with your own API key

canvas = Canvas(API_URL, API_KEY)

# create a calculated_question
# example of a potential divider
#
#  U2 = U0 * R2 / ( R1 + R2 )
#

E3  = [1, 2, 5]
E6  = [1.0, 1.5, 2.2, 3.3, 4.7, 6.8]
E12 = [1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8, 8.2]

coursename = 'test'
quizname   = 'test'

# define the input variable names
#   each variable has its own range, format and scale
#  
variables = \
    [
      {
        'name':   'U0',
        'unit':   'V',
        'format': '{:.1f}',
        'scale':  '1',
        'range':  [1.2, 1.5, 4.5, 9, 12, 24, 48, 110, 220]
      },
      {
        'name':   'R1',
        'unit':   'ohm',
        'format': '{:.1f}',
        'scale':  '1',
        'range':  [ i*j for i, j in itertools.product([10, 100, 1000], E12)]
      },
      {
        'name':   'R2',
        'unit':   'ohm',
        'format': '{:.1f}',
        'scale':  '1',
        'range':  [ i*j for i, j in itertools.product([10, 100, 1000], E12)]
      },
    ]

# how many sets of answers
rows = 30

# create an empty list of lists (array) for the values
values = [ [ i for i in range(len(variables))] for _ in range(rows)]

# create an empty list for the calculated results
results = [i for i in range(rows)]

# fill the array of input values with random choices from the given ranges
for i in range(rows):
    for j in range(len(variables)):
        values[i][j] = random.choice(variables[j].get('range'))

    # and calculate the result value   
    results[i] = values[i][0] * values[i][2] / (values[i][1]+values[i][2])

# format the text field for the question
#   an HTML table is created which presents the variables and their values
question_text = '<p><table border="1"><tr><th></th><th>value</th><th>unit</th></tr>';
for j in range(len(variables)):
    question_text += '<tr>'
    question_text += '<td style="text-align:center;">' + variables[j].get('name') + '</td>'
    question_text += '<td style="text-align:right;">[' + variables[j].get('name') + ']</td>'
    question_text += '<td style="text-align:center;">' + variables[j].get('unit') + '</td>'
    question_text += '</tr>'
question_text += '</table></p>'

# format the central block of values and results
answers = []
for i in range(rows):
    answers.append(\
        {
          'weight': '100',
          'variables':
          [
            {
              'name': variables[j].get('name'),
              'value': variables[j].get('format').format(values[i][j])
            } for j in range(len(variables))
          ],
          'answer_text': '{:.5g}'.format(results[i])
        })

# format the block of variables,
#   'min' and 'max' do not matter since the values are created inside the script
#   'scale' determines the decimal places during output 
variables_block = []
for j in range(len(variables)):
    variables_block.append(\
        {
          'name':  variables[j].get('name'),
          'min':   '1.0',
          'max':   '10.0',
          'scale': variables[j].get('scale')
        })

# put together the structure of the question
new_question = \
    {
      'question_name':           'Question 6',
      'question_type':           'calculated_question',
      'question_text':           question_text,
      'points_possible':         '1.0',
      'correct_comments':        '',
      'incorrect_comments':      '',
      'neutral_comments':        '',
      'correct_comments_html':   '',
      'incorrect_comments_html': '',
      'neutral_comments_html':   '',
      'answers':                 answers,
      'variables':               variables_block,
      'formulas':                ['automated by python'],
      'answer_tolerance':        '5%',
      'formula_decimal_places':  '1',
      'matches':                 None,
      'matching_answer_incorrect_matches': None,
    }
                                 

courses  = canvas.get_courses()
for course in courses:
    if course.name.lower() == coursename.lower():
        print('found course')
        quizzes = course.get_quizzes()
        for quiz in quizzes:
            if quiz.title.lower() == quizname.lower():
                print('found quiz')

                question = quiz.create_question(question = new_question)      
       

Since this is mostly the result of successful reverse engineering and not based on the actual source code of Canvas, the above example should perhaps be used with care, but for me it is what I needed to create usable questions for my students. Perhaps this could also serve the developers as an example of how the interface for calculated questions could be improved in the future.

 

How does it work?

The list variables contains the names and ranges of the variables, as well as formatting instructions; the ranges are given as lists. The first loop fills the array of input values with random choices and calculates the result for each row. The next block builds a rudimentary HTML table, included in the question text, presenting the variables with their values and physical units for this particular question. The answers block then assembles the variable/answer sets, and the new_question dictionary puts everything together for the new question.

The script finally inserts the question into an existing quiz in an existing course via quiz.create_question().

 

After running the script

This screenshot shows the inserted question after running the script; obviously this would need some more cosmetics.

inserted question inside the quiz after executing the script

And when editing the question this is what you see:

editing the question

Be careful not to touch the variables or the formula section, since this will reset the table values.

 

Cosmetics

In order to be presentable to the students, the above question needs some cosmetics. What is to be calculated? Perhaps insert a picture or an equation? More text?

after editing, but still inside the editor

After updating the question and leaving the editor it now looks like this in the Canvas UI:

the modified question inside the quiz

 

Seeing and answering the question

When you now start the quiz, this is how the question looks:

the question as it is seen by the student

Summary

  • calculated_questions can be generated using the Python canvasapi library
  • answer values have to be provided with the key 'answer_text'
    'answers': [
       {
         'weight': '100',
         'variables': [
         {'name': 'U0', 'value': '9.0'},
         {'name': 'R1', 'value': '5600.0'},
         {'name': 'R2', 'value': '5600.0'}],
         'answer_text': '4.5'},

     

  • when querying an existing calculated_question through the API the answer values are found with the key 'answer'
    answers=[
        {'weight': 100,
         'variables': [
          {'name': 'U0', 'value': '110.0'},
          {'name': 'R1', 'value': '82.0'},
          {'name': 'R2', 'value': '8200.0'}],
         'answer': 108.91,
         'id': 3863},

     

  • when supplying an equation for the 'formulas' field, this has to be done in a list, not a dictionary
     'formulas':  ['a*b'],

     

  • when querying an existing calculated_question through the API the equations are found in a dictionary like this:
     formulas=[{'formula': 'a*b'}],

When poking the Canvas API with curl, I would often find myself copying and pasting lots of Authorization headers around to correctly authenticate against Canvas. This is error prone and also leaves your tokens accessible in your .bash_history file. To improve on this, I wrote a small script for macOS that stores the tokens in Apple's Keychain and then automatically adds the Authorization header to the curl request based on the URL on the command line. While this isn't perfect, it's much better and easier to use. The result is a script I call ccurl.

 

Setup

Download a copy of ccurl, make it executable, and set the CCURL_HOSTS environment variable to a space-separated list of hosts you wish to use ccurl against.

$ curl -s -O https://gist.githubusercontent.com/buckett/ed3217fced4b9d129758157b4476aaa6/raw/1fa77f31bdb65b8bf6cb69a0f82bd29839401691/ccurl
$ chmod +x ccurl
$ echo 'export CCURL_HOSTS="canvas.instructure.com canvas.test.instructure.com canvas.beta.instructure.com"' >> ~/.bashrc

You may also wish to put ccurl somewhere on your PATH. Then, to set a token for a host, use the following (history -c flushes bash history so the token doesn't get saved in plain sight):

$ security add-generic-password -a $USER -s canvas.instructure.com -w 7~1K9WJ3xobQp5RX8DUbbSdigxn2WD8yMOfUlCHbH9FIPlyL7E9E5QWSWN4CCVfqAEHC
$ history -c

Use

Then to use it, just run a curl command but add the extra c; it passes all command-line options through to curl, so it should support all examples you see for the standard curl tool (jq is a tool to transform JSON, but here it just formats the output to make it more readable):

$ ccurl -s  https://canvas.instructure.com/api/v1/users/self | jq .
{
  "id": 4539009,
  "name": "Matthew Buckett",
  "created_at": "2015-05-31T19:49:29+01:00",
  "sortable_name": "Buckett, Matthew",
  "short_name": "Matthew Buckett",
  "avatar_url": "https://canvas.instructure.com/images/messages/avatar-50.png",
  "locale": null,
  "effective_locale": "en-GB",
  "permissions": {
    "can_update_name": true,
    "can_update_avatar": true
  }
}

Links

https://canvas.instructure.com/doc/api/file.oauth.html#manual-token-generation - How to create a token for your account.

Two commonly used grading schemes in Sweden are A-F (i.e., A, B, C, D, E, and F) and P/F (i.e., Pass and Fail), together with Fx (an incomplete; not a final grade). To install these grading schemes into a Canvas course or account, I wrote a simple Python program: insert_AFPFFx_grading_standards.py (available at https://github.com/gqmaguirejr/Canvas-tools).

 

Note that the numeric values associated with the letter grades do not matter, as it is only the letter grade that will be recorded in the central grade register (called Ladok).

 

While we have been using Canvas as an LMS for several years, we still did not have such a grading scale available at the account level, so I made a program to make it easy to insert these schemes for a course.
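
The heart of such a program is a single grading-standards call. A minimal sketch using the canvasapi library; the URL, token, course id, and numeric cutoffs are placeholders (as noted above, only the letters matter):

from canvasapi import Canvas

canvas = Canvas('https://canvas.example.edu', '<token>')  # placeholder URL and token
course = canvas.get_course(12345)                         # placeholder course id

# A-F scheme with Fx slotted between F and E; the values are arbitrary.
scheme = [{'name': n, 'value': v} for n, v in
          [('A', 90), ('B', 80), ('C', 70), ('D', 60),
           ('E', 50), ('Fx', 25), ('F', 0)]]
standard = course.add_grading_standards('A-F with Fx', scheme)
print(standard.id)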

TL;DR

I wrote a script that creates a Catalog session and then downloads all users' transcripts. There are some subtle authentication tricks happening, so if the code does not make sense, refer to the following text.

 

Why Did I want to download all Catalog User Transcripts?

Recently we have been looking into institution-branded storefront services other than Catalog. In this process, we have had to look into how to download every transcript before possibly sunsetting our Catalog instance. The simplest way to preserve the user data is to download each user's transcript. Unfortunately, the transcript PDFs are not accessible via the API. However, I have found a workaround, and I wanted to share it with others who are interested in creating a local store of transcript PDFs.

 

What do I need before I do this?

 You will need to have admin rights to both the Catalog instance and the Canvas instance that is linked to the Catalog instance. 

 

Why Can't I Use My API Token to Authenticate?

Accessing transcripts seems relatively trivial; one could simply iterate through the Catalog user IDs and make GET requests to '<CATALOG_DOMAIN>/transcripts/transcript.pdf?user_id=<USER_ID>'. But since this is not accessible via the API, you must simulate a 'login' or create a session (python documentation). Without creating a session, any request will be rerouted to the /login page. This login page contains information that is passed to the login POST request and is hidden from the user. To obtain that info, I use the lxml package (https://lxml.de/lxmlhtml.html) to parse the HTML for these hidden values and then add my username and password to the form before sending it in a POST request to log in.

 

Why Can't I Make a Basic Request Now That I am 'Logged In'?

After simulating a login, I found that making a GET request to /transcripts/transcript.pdf?user_id=<USER_ID> would redirect me to /transcripts/transcript.pdf, which is my own transcript. When I looked at the history of the redirect (see the function history()), I noticed that the user_id parameter was lost in the first redirect, which was to /login?force_login=0&target_uri=%2Ftranscripts%2Ftranscript.pdf.

However, if I requested the page /login with params force_login=0 and target_uri=%2Ftranscripts%2Ftranscript.pdf%3Fuser_id%3D<USER_ID>, the request would ultimately redirect to the desired page! I am not fully sure why this worked; if anyone has any idea why, I would love to know.

 

 

 

Python Script

  Some information has been omitted for privacy and clarity. Please post a comment if there is anything unclear.

# Imports needed to run this script (config, error_log, and the id list
# are assumed to be set up elsewhere, per the note above).
import requests
import lxml.html

# Fill in your details here to be posted to the login form.
username = config.get('catalog','username')
password = config.get('catalog','password')
canvas_catalog_domain = config.get('instance','canvas_catalog')
catalog_domain = config.get('instance', 'catalog')
catalog_headers = {
    'Authorization': 'Token token="%s"' % (config.get('auth_token','catalog')),
}
catalog_ids = ## A LIST OF CATALOG USER IDS ##

# Use 'with' to ensure the session context is closed after use.
with requests.Session() as s:
    login = s.get(catalog_domain+'/login/canvas')
    # print the html returned or something more intelligent to see if it's a successful login page.
    #print(login.text)
    login_html = lxml.html.fromstring(login.text)
    hidden_inputs = login_html.xpath(r'//form//input[@type="hidden"]')
    form = {x.attrib["name"]: x.attrib["value"] for x in hidden_inputs}
    #print("form: ",form)

    form['pseudonym_session[unique_id]']= username
    form['pseudonym_session[password]']= password
    response = s.post(catalog_domain+'/login/canvas',data=form)
    #print(response.url, response.status_code) # gets <domain>?login_success=1 200
    #pp(response.headers)
    # An authorised request.
    if response.status_code != 200:
        raise Exception("Login failed with :", response.status_code)

    for user_id in catalog_ids:
        #print('user_id: ',user_id)
        # getting transcript pdf
        r = s.get(catalog_domain+'/login?force_login=0&target_uri=%2Ftranscripts%2Ftranscript.pdf%3Fuser_id%3D' + user_id, headers=catalog_headers)
        history(r)
        if int(r.status_code) != 200:
            # possible error
            error_log.write('%s -- %s ERROR \n' % (r.url, r.status_code))
        else:  # lets continue getting info
            filename = 'pdfs/%s_catalog_transcript.pdf' % (user_id)
            with open(filename, 'wb') as f:
                f.write(r.content)


## HELPER FUNCTION TO TRACE THE REQUEST ##
# (in a runnable script, define this above the session block that calls it)
def history(r):
    if r.history:
        print("Request was redirected")
        for resp in r.history:
            print('\t', resp.status_code, resp.url)
        print("Final destination:")
        print('\t', r.status_code, r.url)
    else:
        print("Request was not redirected")

 

 

I hope this post helps other Canvas Developers out there! Feel free to contact me if you are trying to troubleshoot running this script or a variation of it. 

 


Maddy Hodges 

Courseware Developer
University of Pennsylvania

In conjunction with experiments concerning using an LTI interface to realize a dynamic quiz, I built a version of Canvas that could run in a virtual machine (specifically, an image that can be run by VirtualBox).

Advantages of this approach include:

  • easy to try things without any risk to the real Canvas infrastructures
  • easy to give an OVA image of the VM to students

The image was built on an Ubuntu version of Linux by following the Quick Start instructions at https://github.com/instructure/canvas-lms/wiki/Quick-Start.

Details of the operation of the different containers that comprise this system will be described in a forthcoming Bachelor's thesis.

I should note that when I run docker-compose to bring up the Canvas instance, it takes a very long time.

In order to do some experiments with the dynamic quiz, I created some fake users and enrolled them in a course. Additionally, since one of the things that I would like the dynamic quiz to exploit is knowledge of what program of study these users are in, I augmented each user's custom data with details of what program they are in.

The initial version of the program (create-fake-users-in-course.py) can be found at  https://github.com/gqmaguirejr/Canvas-tools .

The result is a course with a set of fake users, as can be seen in the list of users in this course:

List of users in the course

An example of fake program data is:

custom data for user Ann FakeStudent is {'data': {'programs': [{'code': 'CINTE', 'name': 'Degree Programme in Information and Communication Technology', 'start': 2016}]}}
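
The program data itself goes through the user custom data endpoint. A rough sketch of that call with requests; the domain, token, user id, and namespace are illustrative, not the values used by create-fake-users-in-course.py:

import requests

API = 'https://canvas.docker/api/v1'           # placeholder: the local VM instance
HEADERS = {'Authorization': 'Bearer <token>'}  # placeholder token

payload = {
    'ns': 'se.example.program-data',  # namespace of your own choosing
    'data': {'programs': [{'code': 'CINTE',
                           'name': 'Degree Programme in Information and Communication Technology',
                           'start': 2016}]}
}
r = requests.put(f'{API}/users/123/custom_data', headers=HEADERS, json=payload)
print(r.json())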

 

Some further details about the above can be found at Creating fake users and enrolling them into a course: Chip sandbox 

Mike Cowen

Templating Courses Tool

Posted by Mike Cowen Employee Nov 13, 2018

Templates are Awesome!  They help create a solid launching point for creating quality courses.

There are a couple of different ways to put a template in a new course shell.

  1. Importing a template course from Commons
  2. Using the Blueprint feature
  3. Copying another course which you have designated as a 'Template'.

 

Depending on the needs and workflow of your institution, any one of these can work very well. This blog post will focus on the third method listed: "Copying another course which you have designated as a Template."

 

Why:

As a Remote Canvas Admin, I have been asked to apply a course template to all newly created courses for an institution that I serve. This was a daily task, so I decided to automate it using Google Sheets / Apps Script. Word got out around the office about my tool and I've had a couple of requests for it, so I decided to make it generic for any institution and share it here in the Community. I chose Google Sheets because it is easily shareable with anyone, uses mostly JavaScript, and offers a UI that makes manipulating data more human-readable.

 

What:

This tool is a Google Sheet that uses Apps Script, which is mostly JavaScript. Once it is configured, just click the 'Canvas' menu, then 'Apply Template to Unchanged & New Courses', and all courses that are newly created will be updated with your template course. BAM!

The logic of the tool is essentially this:

  1. Configure your domain and token.
  2. Tell the script the canvas course ID of the most recently created course. Any courses that are created after this one will have the template applied to it, with one exception.
    1.  Exception: The script will check each course to look for existing content.  If content exists in the course, the template will not be applied. There were a couple of requests for this feature. If you prefer not to use it, you can remove the
      if (getCourseChanges(check_course_id) == 0)
      from within the processCSV function.
  3. Tell the script what is the course that you want to use as a template
  4. The script then creates and downloads a courses.csv provisioning report and compares each course to see if it is newer than the course mentioned in step 2. If a course is newer, the template is applied using a content_migrations API endpoint (see the sketch after this list).
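
Outside of Apps Script, the underlying template copy is a single content_migrations call. A minimal Python sketch with a placeholder domain, token, and course ids:

import requests

API = 'https://myschool.instructure.com/api/v1'  # placeholder domain
HEADERS = {'Authorization': 'Bearer <token>'}    # placeholder token

# Copy template course 1234 into the new course shell 4567.
r = requests.post(f'{API}/courses/4567/content_migrations',
                  headers=HEADERS,
                  data={'migration_type': 'course_copy_importer',
                        'settings[source_course_id]': 1234})
print(r.json()['workflow_state'])  # poll the migration until it completes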

 

For a better user experience, the script then gets the status of the migration and updates a cell when the migration is complete. I also made the name of each course a link to the course to make it easy to verify content.  

 

How to Use the Tool:

These instructions are in the 'Instructions' tab in the spreadsheet.

Configuration Instructions - This only needs to be done once

  1. In the 'Canvas Menu' select 'Configure API Settings'
  2. When prompted, Authorize the script by clicking on 'Authorize.'
  3. Select an email address to run the script.
  4. Select Allow
  5. In the 'Canvas Hostname' box, enter the URL of your Canvas instance. For example myschool.instructure.com
  6. In the 'Access Token' box, enter your token. (For help creating the token, click on the ' How do I obtain an API access token for an account?' link right below the box.)
  7. Click 'Submit'
  8. If the Notice box says 'Using <YOURSCHOOL.instructure.com> for your Canvas Hostname.' then click OK. Otherwise click 'Cancel' and repeat steps 5-7.
  9. In the 'Canvas' Menu select 'Don't Apply Template to Courses Created Before This Course ID'. Enter the Canvas course ID of the most recently created course in your instance and click OK. If you want to look at a list of all your courses, select 'Show All My Courses' from the 'Canvas' menu and wait a few minutes for the courses.csv report to create & populate the 'CurrentCourses' tab.
  10. In the 'Canvas' Menu select 'Set Template Course' and enter the Canvas course ID of your template course and click OK. 

 

Applying the Template Course

  1. In the Canvas menu, select 'Apply Template to Unchanged & New Courses'
    1. The Google Sheet will now get a Courses provisioning report and work its magic. In a few minutes, you will see a list of your courses in the 'Current Courses' tab.
    2. After the courses display on the CurrentCourses tab, the new courses will appear on the NewCourses tab and the template will automatically be applied to them.
    3. After another minute or so, the 'Status' and 'date/time' columns will update when the template course is applied
    4. There are links on the courses for your convenience
    5. Note: Google Scripts have a time limit. If you have a larger institution with hundreds of thousands of courses, this script may time-out. If it does, you can re-start the 'status update' by selecting 'Update Status of Template Copying' from the Canvas menu.
  2. If you wish to automate this and run it on a daily timer, select Tools > Script Editor > Edit > Current project's triggers. Then create a trigger to run updateNewCourses.

 

Cautions/Speedbumps

As mentioned above, Google Apps Script has limitations. Depending on the size of your institution, the script may time out. If that happens, you're either a HUGE institution or making a LOT of changes. You may want to remove the 'if (getCourseChanges(check_course_id) == 0)' check as mentioned above.

Just as a point of reference, I have worked with one institution that has almost 600K courses. That report takes about 12 minutes to create, so I estimate around 50K courses per minute to create the courses.csv file. Your mileage may vary.

The time required to apply the template course will also vary based on the quantity of material in the template.

 

What's Next:

  • Frankly, I'm not thrilled with the getMigrationStatus() function. If you feel the desire to help, please see the comments in the code.
  • I know my code isn't always optimized as well as it could be. If/when you find ways to improve it, please let me know. I love to learn and appreciate any help.
  • If you add more feature & functionality to this, please share back to the community. 

 

Credits:

James Jones - The Configure API Settings and Forget API Settings come straight from his super work. If you haven't seen his Canvancements, prepare to be amazed!

Jeremy Perkins - Thanks for showing me how to pull reports and put them into sheets! 

 

Link to Template

I almost forgot... Here is a link to the template

You'll need to make a copy of it in your own drive first.

 

Video Demo

CHANGELOG

Sep 19, 2018 > Sep 21, 2018 12:28 PM

Jun 6, 2018

May 31, 2018

 

README

If you're a system administrator of a Canvas LMS instance with a deep organization of sub accounts, you have inevitably found yourself in one of Dante's nine circles of Hell, repetitively clicking Sub Accounts at each and every level as you drill down to where you need to go. Here at CCSD we have over 8,000 sub accounts, and yet this only affects 5 people. So we are sharing this to help anyone else who is trapped.

 

The JavaScript and CSS files here add a directory-style recursive HTML menu to the Canvas global navigation admin tray...

 

In Pictures

Screenshots: Admin Tray Collapsed, Admin Tray Expanded, and Admin Tray Search.

Configuration & Setup

Zero to little configuration is needed for admintray-subaccmenu.js, so I won't even bother explaining subacctray.cfg

 

Option 1

  1. Host the file admintray-subaccmenu.js on a secure server (https) or somewhere like Amazon S3
  2. Copy/append the contents of admintray-subaccmenu.inc.js to your Canvas Theme JavaScript file, point the URL to the hosted file.

Option 2 (Quick Start)

  1. Just copy the example snippet below, which uses this repo's version via RawGit, to your Canvas Theme JavaScript file. However, please note that the RawGit URL points to the repo source, and any changes to it may affect your usage later. I recommend hosting your own copy for stability, but this can get you started.

 

Once loaded, the script will initialize an API call (to /api/v1/accounts/self/sub_accounts) and loop through the paginated results 100 at a time, collecting every sub account in your instance; it then compiles a recursive HTML unordered list and stores it in your browser's localStorage.
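
For illustration, here is that collection step sketched in Python rather than the script's JavaScript; the instance and token are placeholders, and recursive=true asks Canvas for the whole subtree:

import requests

API = 'https://x.instructure.com/api/v1'       # placeholder instance
HEADERS = {'Authorization': 'Bearer <token>'}  # placeholder token

# Page through every sub account, 100 at a time, following the Link header.
accounts = []
url = f'{API}/accounts/self/sub_accounts'
params = {'recursive': True, 'per_page': 100}
while url:
    r = requests.get(url, headers=HEADERS, params=params)
    accounts.extend(r.json())
    url = r.links.get('next', {}).get('url')
    params = None  # the next-page URL already carries the query string
print(len(accounts))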

Depending on the number of your sub accounts and your internet speed, this will take a moment. Be patient for the first load: open the tray and wait for the menu to show up. This takes us about 45-60 seconds on production. You will see a loading indicator in the Admin Tray while it compiles.

 

Features & Use

 

Instance Independent

There are some users (like Instructure Canvas Remote Admins) who may log in to multiple Canvas instances/institutions, so each menu is stored in localStorage based on the uniqueness of x.instructure.com, including x.beta.instructure.com, x.test.instructure.com, and xyz.instructure.com.

For most users and institutions this will go mostly unnoticed, unless you have different sub accounts between your Production, Test, and Beta environments. I made this update to help the users it will affect. I also personally hate copying, pasting, or replicating files just to change one line. This update allows one file to be hosted on a CDN like Amazon S3; simply change the var show_results_parent value in admintray-subaccmenu.inc.js or leave it blank.

Therefore an already minified file has been provided in the repo.

 

Alphabetical Sorting

Yup! As far as I know, the Canvas API does not allow sorting results alphabetically during pagination, so the entire stack of sub accounts is sorted prior to building the menu.

Localized Search

The search uses JavaScript/jQuery to search within the stored HTML, preventing further API calls. You can search the entire sub account menu by name.

Search Result - Prefix Parent Account (Skip-to-Depth Display)

I don't know what else to call it, so here is an explanation.

Suppose your directory structure looks something like the following, where (#) is the depth of the account.

  • High Schools (1)
    • George Washington HS (2)
      • Course Shells (3)
      • SIS Courses (3)
        • English (4)
        • Math (4)
        • Science (4)
  • Middle Schools (1)
    • Betsy Ross MS (2)
      • Course Shells (3)
      • SIS Courses (3)
        • English (4)
        • Math (4)
        • Science (4)

 

When we search for Science and get multiple sub accounts named 'Science', we can't identify which one we want to choose.

  1. Science
  2. Science

So if we map the result's depth of 4 to its parent depth at 2 (instead of 3, SIS Courses), we can get a result like:

  1. George Washington HS > Science
  2. Betsy Ross MS > Science

 

And both are links to their respective account.

 

Examples

// one skip
show_results_parent = { 4:2 }
// expected result:
// George Washington HS > Science

// or multiple skips, must be unique
show_results_parent = { 4:2, 3:2 }
// expected results:
// (4:2) George Washington HS > Science
// (3:2) George Washington HS > SIS Courses

Note: If you don't define the skip, it will not display a parent

Reload

The ↻ button at the bottom right of the menu will clear the menu from localStorage and recompile it for the current Canvas instance. Use this when you or other admins have added or removed sub accounts.

 

User Impact

As mentioned above, CCSD has 5 system admins, with 40,000 employees and over 300,000 students. Be kind to your users: use the snippet below to reduce the impact of your admin-only tools. This is included in the repo as admintray-subaccmenu.inc.js

if (ccsd.util.hasAnyRole('admin','root_admin')) {
  // used for search results, result:parent, see documentation for more details
  var show_results_parent = {}
  // async load the sub account menu script
  $.ajax({
    url: 'https://cdn.rawgit.com/robert-carroll/ccsd-canvas/master/admintray-subaccmenu/admintray-subaccmenu.min.js',
    dataType: 'script',
    cache: true,
    data: {skipd: JSON.stringify(show_results_parent)}
  })
}

 

User Script

At the suggestion of, and with some coaching from, James Jones, I have created a User Script version. This is useful for those admins who can't or don't want to install this script in their Canvas Theme/Global JavaScript. This script is identical to the global JavaScript file, except that the CSS will be added to the DOM by the script and it has the various userscript requirements at the top for Tampermonkey.

  1. You will need to install and enable the Tampermonkey browser extension
  2. Install the UserScript version of admintray-subaccmenu.user.js

 

Known Issues

If you are not a root admin of your Canvas instance, you will need to set one sub account in the root setting. At this time you cannot add multiple sub accounts; I am planning to fix this.

 

Code repo here...

ccsd-canvas/admintray-subaccmenu at master · robert-carroll/ccsd-canvas · GitHub

 

 

Credits for Blog Post Banner Image

Public domain

Chart of Hell
File:Sandro Botticelli - La Carte de l'Enfer.jpg - Wikimedia Commons
 

Botticelli : de Laurent le Magnifique à Savonarole : catalogue de l'exposition à Paris, Musée du Luxembourg, du 1er octobre 2003 au 22 février 2004 et à Florence, Palazzo Strozzi, du 10 mars au 11 juillet 2004. Milan : Skira editore, Paris : Musée du Luxembourg, 2003. ISBN 9788884915641

 

{{PD-US}}: This media file is in the public domain in the United States. This applies to U.S. works where the copyright has expired, often because first publication occurred prior to January 1, 1923.