
Canvas Developers


This blog describes how to move user enrollments from one role to another using a Python class, SQL data, and a mapping file.


So here is the situation we are presently facing at Everett Public Schools.  Along with our base roles of Student, Teacher, Designer, etc., we also have custom roles derived from those base roles.  These custom roles are a bit more refined and help keep users and their permissions in check.  The problem with this idea is that not everyone follows the rules when assigning a role to a user when that user is enrolled into a course.  This quickly becomes an issue when trying to search and sort users based upon their permissions.


Case in point: We have teachers that are enrolled as students in staff courses or portals that are located at their respective school or sub-account.  So are they truly a student in the classic sense?  No.  When you do a blind search for students, you get back a bunch of teachers and maybe a few other users that somebody down the line added to a course as a student.  Now that the user data set has gotten out of hand, how do you move those enrollments over to the new custom role that you just created?  In addition to that, how do you keep it all in sync?


The solution comes in a few simple steps which you can follow below.  First, you need to decide what data set of users needs to be moved from one role to another.  In our case, we wanted non-students (i.e. district staff) that were currently assigned the base role of StudentEnrollment (aka Student).  These district IDs are the same as their login ID and SIS ID, so it keeps things straight.  Since we run multiple nightly integrations, we simply created a new section in our SQL code to pull only the district staff IDs.  Like this:

IF @type = 'STAFF_USERS'
SELECT login_id
FROM eps_canvas.dbo.users
WHERE user_type = 'F';

Just a bit of backstory to explain the logic.  In Everett we use several nightly imports into Canvas to roster courses, control users, etc.  More on that in another blog, but suffice it to say it works very well.  We use a 'users' table in a smaller database to control who gets put into Canvas.  The user_type of 'F' is for 'faculty'.  So when this script runs, it uses the 'staff_users' input parameter to control what data set the script will receive.  This logic comes from the script configuration .ini file:



#API SIS upload URL for the site
#Root account should always be 1

#The URL string data that allows acting as another user
#The 'replace' placeholder gets replaced with the correct term in the script
masqueradeData: {"as_user_id": "replace"}

#The list of parameters to pull from the DB
#Use this list to drive the role mapping below
#Comma delimited, any order
dbParams: staff_users

#Text of the SQL Server stored procedure SQL
#For getting district ids
dbSQL: exec eps_internal.dbo.pyCurGetCanvasCustomExtracts ?

#The endpoint to get enrollments for a user
enrollmentsEndpoint: users/self/enrollments

#The endpoint to enroll the user in the course
coursesEndpoint: courses/{}/enrollments

#The endpoint to get all of the current roles
rolesEndpoint: accounts/1/roles

#The mapping from one role to another for each DB parameter
#The key for each map is keyed off of the dbParams list
#The JSON object for each dbParam is a key of the permission type to find, the value is the role to assign
#All values are case sensitive and must match exactly to what is in Canvas
roleMapping: {"staff_users": {"StudentEnrollment": "Adult Learner"}}

When the script is executed, it looks for an associated configuration file and reads in the [Default] section data.  It also reads a master configuration file so it can set some global variables, but that is outside the scope of this post.  Each parameter is then assigned to an internal variable that the script uses to do its thing.  Jumping down to the bottom line in the file, the roleMapping dictionary is keyed to the dbParams value.  This is how the data set knows what users to process, what role to look for (in this case 'StudentEnrollment') and what role to use when enrolling the user into the current course ('Adult Learner').  If we wanted to process more users through this script workflow, we would add a value to the dbParams list and add the same value to the roleMapping dictionary along with the roles to use.
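As a small, self-contained sketch of that keying (the .ini is inlined here for illustration; the real script reads its own file), this is how the dbParams list and the roleMapping dictionary tie together:

```python
# Sketch of how the [Default] section values above parse and key together.
# The file handling is simplified (inline string instead of a real .ini file).
from configparser import ConfigParser
from io import StringIO
from json import loads

INI = """
[Default]
dbParams: staff_users
roleMapping: {"staff_users": {"StudentEnrollment": "Adult Learner"}}
"""

cfg = ConfigParser()
cfg.read_file(StringIO(INI))

# dbParams is comma delimited; each entry keys into roleMapping
param_list = [p.strip() for p in cfg['Default']['dbParams'].split(',')]
roles_map = loads(cfg['Default']['roleMapping'])

for param in param_list:
    for old_role, new_role in roles_map[param].items():
        print(f"{param}: move {old_role} -> {new_role}")
```

Adding another data set is then just a matter of appending to dbParams and adding a matching key to roleMapping.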


At some point, we needed to create our 'Adult Learner' role.  We wanted a role that was student based but that could be used for staff members that are fulfilling some student role in a course somewhere.  We wanted the student role to truly reflect actual students in the district.


So now we are ready to roll.  Consider this Python class:


from requests import Session
from classEpsDB import EpsDB
from classEpsException import EpsException
from classEpsConfiguration import EpsConfiguration
from json import loads
from urllib import parse

class EpsITSyncCanvasEnrollments(object):
    """
    Syncs the Canvas enrollments between what was assigned to a user and what should be the correct assignment.
    We do this to keep users from getting an incorrect enrollment and to streamline the search process.
    @package: epsIT
    @copyright: 2020, Everett Public Schools
    @author: DPassey
    @version: 1.0, 02.24.2020
    """

    def __init__(self, user_id_type='sis_user_id'):
        """
        Class initializer.
        Parses the config file_name, assigning values as needed.
        @raise exception: EpsException
        """
        try:
            cfg = EpsConfiguration(f"{self.__class__.__name__}.ini")
            self.rc = 0
            if not cfg.db_dsn: raise Exception(f"{self.__class__.__name__}.__init__. DSN data source is missing.")
            for key in cfg.locals:
                k = key.upper().strip()
                v = cfg.locals[key].strip()
                if k == 'DBSQL': db_sql = v
                if k == 'DBPARAMS': param_list = v.split(',')
                if k == 'ROOTURL': root_url = v
                if k == 'MASQUERADEDATA': masquerade = v
                if k == 'ENROLLMENTSENDPOINT': enroll_endpoint = v
                if k == 'COURSESENDPOINT': course_endpoint = v
                if k == 'ROLEMAPPING': roles_map = loads(v)
                if k == 'ROLESENDPOINT': roles_endpoint = v

            # set the session header
            self.header = {'Authorization': f'Bearer {cfg.canvas_token}'}

            # must be one of these
            if user_id_type not in ('sis_user_id', 'sis_login_id'): raise Exception(f'{self.__class__.__name__}.__init__. Invalid parameter: {user_id_type}.')

            # create a session
            with Session() as self.session:
                # get the type of user from the parameter list
                for _ in param_list:
                    # get all of the active roles
                    url = f"{root_url}{roles_endpoint}"
                    # for each mapped role for this parameter, get the role's id
                    roles_dict = self.get_account_roles(url, roles_map[_])
                    # get the data to process for each parameter
                    data = self.get_data(cfg.db_dsn, db_sql, _)
                    # proceed if we get user data
                    if data:
                        # for each user in the data, find the applicable enrollments to move
                        for user in data:
                            # set up masquerading
                            self.data_dict = loads(masquerade.replace('replace', "{}:{}".format(user_id_type, user[0])))
                            # get all of the user's enrollments to see if we need to change enrollments
                            user_dict = self.get_enrollments(f"{root_url}{enroll_endpoint}", roles_map[_])
                            # now process the users by their Canvas id
                            for user_id in user_dict:
                                # process each course and re-enroll the user
                                # we need to keep the indexing linked between course and enrollment
                                for c, course in enumerate(user_dict[user_id]['courses']):
                                    # get the role id of the new role
                                    # need this to move enrollments
                                    role_id = roles_dict[user_dict[user_id]['roles'][c]]
                                    # get the current enrollment id
                                    enroll_id = user_dict[user_id]['enrollments'][c]
                                    endpoint = course_endpoint.format(course)
                                    # now set the new enrollments
                                    self.set_enrollment(f"{root_url}{endpoint}", user_id, role_id, enroll_id)
        except Exception as e:
            # surface any failure through the application exception class
            raise EpsException(e)

    def get_data(self, dsn, sql, param):
        """
        Executes the stored procedure and gets the applicable data set.
        @param dsn: String
        @param sql: String
        @param param: String
        @return: List
        @raise exception: EpsException
        """
        try:
            db = EpsDB(dsn)
            if not db: raise Exception(f"{self.__class__.__name__}.get_data. Could not connect to database.")
            rs = db.get(sql, param)
            if not rs: raise Exception(f"{self.__class__.__name__}.get_data. No data set returned.")
            return rs
        except Exception as e:
            raise EpsException(e)

    def get_account_roles(self, url, role_dict):
        """
        Gets the active roles and puts them in a roles dictionary.
        @param url: String
        @param role_dict: Dictionary
        @return Dictionary
        @raise exception: EpsException
        """
        try:
            role_id_dict = {}
            # get all active roles
            data_dict = {'state[]': 'active', 'per_page': 100}
            # query parameters belong in the URL for a GET request
            resp = self.session.get(url, params=data_dict, headers=self.header)
            if resp.status_code == 200:
                # assume a single page unless the "Link" header says otherwise
                page_total = 1
                # check the headers "Link" attribute for the last relational link
                for link in resp.headers['Link'].split(','):
                    if 'rel=last' in link.replace('"', '').replace("'", '').lower():
                        # grab the total pages count by parsing out the url parts and convert to int
                        page_total = int(parse.parse_qs(parse.urlparse(link.split(';')[0])[4])['page'][0])
                # we need to get all results since the response is paginated
                p = 1
                while p <= page_total:
                    data_dict.update({'page': p})
                    resp = self.session.get(url, params=data_dict, headers=self.header)
                    json = loads(resp.text)
                    for _ in json:
                        if _['role'] in role_dict.values(): role_id_dict[_['role']] = _['id']
                    p += 1
            else: raise Exception(f"{self.__class__.__name__}.get_account_roles. Response {resp.text} returned.")
            return role_id_dict
        except Exception as e:
            raise EpsException(e)

    def get_enrollments(self, url, map_dict):
        """
        Gets the roles for the user and places them in a user dictionary.
        @param url: String
        @param map_dict: Dictionary
        @return Dictionary
        @raise exception: EpsException
        """
        try:
            user_id = None
            user_list = []
            enrollments_list = []
            roles_list = []
            user_dict = {}
            # make a copy of the class data dictionary so we can update it
            data_dict = self.data_dict.copy()
            # we should never exceed the per_page value
            # i mean really....over 100 enrollments?
            # current_and_future is a special state for all courses, published and unpublished
            data_dict.update({'state[]': 'current_and_future', 'per_page': 100})
            resp = self.session.get(url, params=data_dict, headers=self.header)
            if resp.status_code == 200:
                json = loads(resp.text)
                for _ in json:
                    # check if user is enrolled in the course per the map_dict keys
                    if _['role'] in map_dict:
                        user_id = _['user_id']
                        # keep the course, enrollment and role lists index-aligned
                        user_list.append(_['course_id'])
                        enrollments_list.append(_['id'])
                        # store the name of the new role mapped from the current role
                        roles_list.append(map_dict[_['role']])
                # build the user enrollment dictionary for those mapped roles
                if user_list: user_dict = {user_id: {"courses": user_list, "enrollments": enrollments_list, "roles": roles_list}}
            else: raise Exception(f"{self.__class__.__name__}.get_enrollments. Response {resp.text} returned.")
            return user_dict
        except Exception as e:
            raise EpsException(e)

    def set_enrollment(self, url, user_id, role_id, enroll_id):
        """
        Sets the user enrollment for the course by deleting the original enrollment, making a new one.
        @param url: String
        @param user_id: Int
        @param role_id: Int
        @param enroll_id: Int
        @raise exception: EpsException
        """
        try:
            # now we enroll the user in the proper role
            # we keep the enrollment type blank so the role id will override the base enrollment
            data = {"enrollment[user_id]": user_id, "enrollment[type]": '', "enrollment[role_id]": role_id, "enrollment[enrollment_state]": "active"}
            resp = self.session.post(url, data=data, headers=self.header)
            if resp.status_code == 200:
                # do not change the url as we want to delete the old enrollment now
                resp = self.session.delete(f"{url}/{enroll_id}", data={"task": "delete"}, headers=self.header)
                if resp.status_code == 200: self.rc += 1
                else: raise Exception(f"{self.__class__.__name__}.set_enrollment. Response {resp.text} returned.")
            else: raise Exception(f"{self.__class__.__name__}.set_enrollment. Response {resp.text} returned.")
        except Exception as e:
            raise EpsException(e)

# end of class
x = EpsITSyncCanvasEnrollments()

This is the flow:

  1. Read in the configuration .ini files, one that is global (the EpsConfiguration class) and one that is named the same as this class
  2. Assign the configuration values to class values
  3. Query the database for the data set of user login ids
  4. Get a data set of all of the roles that currently exist in our Canvas instance
  5. For each user, act as that user and get all of the current and future enrollments
  6. Using the mapping dictionary, find each enrollment that we need to change and get the role id value from the list of roles that were grabbed earlier
  7. For each enrollment that is applicable for the user, enroll the user in the new role for the course and set it to active and then delete the old enrollment


And there you go.  You have moved all of your applicable enrollments over to the new one without having to do it manually.  Setting this script up as a regular job, depending on your needs of course, will ensure that your Canvas user role assignments don't get out of control.

I find the current system of emails of newly submitted assignments to be almost worthless, as I am in a number of courses where there are large numbers of students and most of them are irrelevant from my point of view as a teacher. In these courses, sections have been created to make it easy for a teacher to view the subset of students that is actually relevant to the teacher. However, since I have a large number of such courses (i.e., more than a dozen) and students are submitting material at their own pace through these courses, it is difficult to find the wheat among the chaff of notices about submissions for each of these courses.


This motivated the design of a program to get information about just the assignment submissions that I am interested in. Of course one can easily get a list of all the courses that a user is in, but how can you know what sections within these courses a user is interested in?  The answer is to ask the user to provide this information!


The result is two programs:


The first program creates a JSON formatted file with a course_info dictionary of the form:

{"courses_to_ignore": dict_of_courses_to_ignore,
 "courses_without_specific_sections": dict_of_courses_without_specific_sections,
 "courses_with_sections": dict_of_courses_with_sections}



courses_to_ignore are courses that the user wants to ignore
courses_without_specific_sections are courses where the user is responsible for all the students in the course
courses_with_sections are courses where the user has a specific section - the specific section's name may be the user's name (in Canvas) or some other unique string (such as "Chip's section"). Because the name of the relevant section can be arbitrary, this file is necessary to know which section belongs to a given user.


The second program reads the information from the JSON file, prunes the courses_to_ignore from the list of the user's courses, and then uses the information from courses_without_specific_sections and courses_with_sections to iterate through the courses, looking for ungraded assignments for each of the relevant students (in a course or section). Currently, the program just outputs information about these assignments.
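The pruning and routing logic just described can be sketched as follows (the data here is invented for illustration; the real programs read the JSON file and query the Canvas API for submissions):

```python
# Sketch of how the three dictionaries route the work:
# ignored courses are dropped, and the rest are checked either
# course-wide or only for the user's own section.
course_info = {
    "courses_to_ignore": {"101": "Old course"},
    "courses_without_specific_sections": {"202": "Course I grade fully"},
    "courses_with_sections": {"303": {"name": "Big course", "section": "Chip's section"}},
}

all_my_courses = ["101", "202", "303"]  # would come from the Canvas API

# drop the ignored courses first
courses = [c for c in all_my_courses if c not in course_info["courses_to_ignore"]]

for course_id in courses:
    if course_id in course_info["courses_without_specific_sections"]:
        print(f"{course_id}: check ungraded submissions for all students")
    else:
        section = course_info["courses_with_sections"][course_id]["section"]
        print(f"{course_id}: check ungraded submissions for students in {section!r}")
```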


Setting up the JSON file is easy: you simply run the first program and then move entries from the courses_with_sections dict to one of the other dicts (removing unnecessary or irrelevant sections as you go). You can run the first program in update mode (with the -U flag) to add more courses - it remembers the courses you have set to be ignored and the ones where you have responsibility for all the students.


The programs can be found at GitHub - gqmaguirejr/Canvas-tools: Some tools for use with the Canvas LMS. 


Of course, I discovered an assignment that had been submitted that I had not seen, so on to grading it!

For some time I have been running a local Canvas instance for development activities. This has enabled me to both peek under the covers and give a VM with a complete Canvas instance and programs that I have developed to students.


During the summer I noticed that, after updating the code using the github Canvas sources, I had a flashing dashboard that would never render a static dashboard, and when I went to the assignments page I could not see the list of assignments.

When using the inspector in the browser I could see the results of the query return the JSON for the assignments in the course. However, nothing appeared.

After some looking at the assignments page, I found that where I expected to see the assignments there was a div whose class included "hide-content-while-scripts-not-loaded". Then, searching in the source code (using find), I found the following:

find . -type f -exec grep hide-content-while-scripts-not-loaded {} \; -print
  @body_classes << 'hide-content-while-scripts-not-loaded'
./app/views/assignments/new_index.html.erb
  @body_classes << 'hide-content-while-scripts-not-loaded'
./app/views/courses/show.html.erb
  @body_classes << 'hide-content-while-scripts-not-loaded right-side-optional'
./app/views/announcements/index.html.erb
  @body_classes << 'hide-content-while-scripts-not-loaded'
./app/views/discussion_topics/index.html.erb
  @body_classes << "full-width no-page-block hide-content-while-scripts-not-loaded"
./app/views/calendars/show.html.erb

So this hiding of contents occurs in a number of places, but I could not find the CSS.
After a bit of searching, I found the following Sass:

// This hides stuff till the javascript has done it's stuff
.hide-content-while-scripts-not-loaded
  #content, #right-side-wrapper
    +single-transition(opacity, 0.3s)
    +opacity(1)
.scripts-not-loaded
  #content, #right-side-wrapper
    +opacity(0)

The above means that the results are purposely hidden until some javascript has been loaded.

Additionally, using the inspector in the browser I saw the following when trying to display the page for assignments for a course:

assignment_index.js:14 Uncaught (in promise) Error: Cannot find module '@instructure/js-utils'
    at webpackMissingModule (assignment_index.js:14)
    at eval (assignment_index.js:14)
    at Module.sMe2 (assignment_index-c-9c2eac0849.js:1941)
    at __webpack_require__ (main-e-a68344b004.js:64)

Going to the docker container where the webpack is built, I ran yarn run webpack. In this I found:

ERROR in ./app/jsx/bundles/dashboard_card.js
Module not found: Error: Can't resolve '@instructure/js-utils' in '/usr/src/app/app/jsx/bundles'
 @ ./app/jsx/bundles/dashboard_card.js 22:0-65 40:33-39 40:40-56
 @ ./node_modules/bundles-generated.js
 @ ./app/jsx/main.js

ERROR in ./app/jsx/bundles/assignment_index.js
Module not found: Error: Can't resolve '@instructure/js-utils' in '/usr/src/app/app/jsx/bundles'
 @ ./app/jsx/bundles/assignment_index.js 29:0-57 91:0-16
 @ ./node_modules/bundles-generated.js
 @ ./app/jsx/main.js

ERROR in ./app/jsx/dashboard/DashboardHeader.js
Module not found: Error: Can't resolve '@instructure/js-utils' in '/usr/src/app/app/jsx/dashboard'
 @ ./app/jsx/dashboard/DashboardHeader.js 37:0-65 283:27-33 283:34-50
 @ ./app/jsx/bundles/dashboard.js
 @ ./node_modules/bundles-generated.js
 @ ./app/jsx/main.js

ERROR in ./app/jsx/discussions/apiClient.js
Module not found: Error: Can't resolve '@instructure/js-utils' in '/usr/src/app/app/jsx/discussions'
 @ ./app/jsx/discussions/apiClient.js 19:0-66 28:9-16 28:17-33
 @ ./app/jsx/discussions/actions.js
 @ ./app/jsx/discussions/components/DiscussionsIndex.js
 @ ./app/jsx/discussions/index.js
 @ ./app/jsx/bundles/discussion_topics_index_v2.js
 @ ./node_modules/bundles-generated.js
 @ ./app/jsx/main.js

The above means that js-utils is not found, despite the fact that it is a package, as one can see from the output of the command "ls packages":

babel-preset-pretranslated-format-message canvas-planner canvas-rce canvas-supported-browsers jest-moxios-utils js-utils k5uploader old-copy-of-react-14-that-is-just-here-so-if-analytics-is-checked-out-it-doesnt-change-yarn.lock

Similar to the solution in an earlier posting, the fix is to add the following to the services -> jobs -> volumes key in the docker-compose.override.yml file:

- js-utils:/usr/src/app/packages/js-utils

and then, to the volumes key farther down the file, add this:

js-utils: {}

This fixes the problems with dashboard and assignments!

I also noticed that another module ('canvas-planner') in packages has problems during the yarn run webpack:

ERROR in ./packages/canvas-planner/lib/actions/index.js
Module not found: Error: Can't resolve '@instructure/js-utils' in '/usr/src/app/packages/canvas-planner/lib/actions'
 @ ./packages/canvas-planner/lib/actions/index.js 22:0-66 101:18-25 101:26-42
 @ ./packages/canvas-planner/lib/index.js
 @ ./app/jsx/dashboard/DashboardHeader.js
 @ ./app/jsx/bundles/dashboard.js
 @ ./node_modules/bundles-generated.js
 @ ./app/jsx/main.js

ERROR in ./packages/canvas-planner/lib/actions/loading-actions.js
Module not found: Error: Can't resolve '@instructure/js-utils' in '/usr/src/app/packages/canvas-planner/lib/actions'
 @ ./packages/canvas-planner/lib/actions/loading-actions.js 24:0-66 82:18-25 82:26-42 158:16-23 158:24-40
 @ ./packages/canvas-planner/lib/actions/index.js
 @ ./packages/canvas-planner/lib/index.js
 @ ./app/jsx/dashboard/DashboardHeader.js
 @ ./app/jsx/bundles/dashboard.js
 @ ./node_modules/bundles-generated.js
 @ ./app/jsx/main.js

My hypothesis is that a similar approach can be used to solve this problem. However, the output of yarn run webpack also shows the following (edited to reduce the mass of output):

ERROR in ./packages/canvas-planner/lib/actions/loading-actions.js
Module not found: Error: Can't resolve '@instructure/js-utils' in '/usr/src/app/packages/canvas-planner/lib/actions'
 @ ./packages/canvas-planner/lib/actions/loading-actions.js 24:0-66 82:18-25 82:26-42 158:16-23 158:24-40
 @ ./packages/canvas-planner/lib/actions/index.js
 @ ./packages/canvas-planner/lib/index.js
 @ ./app/jsx/dashboard/DashboardHeader.js
 @ ./app/jsx/bundles/dashboard.js
 @ ./node_modules/bundles-generated.js
 @ ./app/jsx/main.js

ERROR in ./app/coffeescripts/media_comments/js_uploader.js
Module not found: Error: Can't resolve '@instructure/k5uploader' in '/usr/src/app/app/coffeescripts/media_comments'
 @ ./app/coffeescripts/media_comments/js_uploader.js 21:0-49 106:26-36 123:26-36
 @ ./public/javascripts/media_comments.js
 @ ./app/jsx/runOnEveryPageButDontBlockAnythingElse.js
 @ ./app/jsx/main.js

ERROR in ./packages/canvas-rce/lib/bridge/Bridge.js
Module not found: Error: Can't resolve '@instructure/k5uploader' in '/usr/src/app/packages/canvas-rce/lib/bridge'
 @ ./packages/canvas-rce/lib/bridge/Bridge.js 21:0-49 69:38-48 ...

ERROR in ./packages/canvas-rce/lib/rce/ResizeHandle.js
Module not found: Error: Can't resolve 'react-draggable' in '/usr/src/app/packages/canvas-rce/lib/rce'
 @ ./packages/canvas-rce/lib/rce/ResizeHandle.js 22:0-48 65:27-40 ...

ModuleDependencyWarning: "export 'passthroughProps' was not found in '@instructure/ui-react-utils' ...
ModuleDependencyWarning: "export 'passthroughProps' was not found in '@instructure/ui-react-utils' ...

98% after emitting SizeLimitsPlugin
[ ModuleDependencyWarning: "export 'addInputModeListener' was not found in '@instructure/ui-dom-utils' ...,
  ModuleDependencyWarning: "export 'passthroughProps' was not found in '@instructure/ui-react-utils' ...,
  ModuleDependencyWarning: "export 'passthroughProps' was not found in '@instructure/ui-react-utils' ... ]

It makes me curious as to why all of these missing files include the path "@instructure". Is there some error in the configuration that leads to the packages not being found (despite the fact that a "yarn list" showed "@instructure/js-utils" as installed)?


I should note that I am a novice with respect to Javascript - so some of the problems might be operator error, but the Canvas source code was freshly installed via the quick start update script.

We've been working for a while on leveraging the Canvas API to work with other systems for particular learning use cases. We're developing a middleware app using ASP.NET Core MVC to manage the integrations.


We've been using the access tokens that each Canvas user can generate to work with the API. This is fine for development and testing but when we need to extend usage we want to avoid requesting users create their own tokens. A neater solution is to authenticate directly into Canvas using OAuth and, from this, get a token for the logged in user that can be used for subsequent API calls. This maintains the context based security that is a key feature of the access token.


Before I get into the steps to getting OAuth to work in ASP.NET Core MVC and the intricacies of connecting to Canvas, I'll give you a link to a GitHub repo that contains a very simple example. This is not production code and is an example only.


I also want to acknowledge the series of posts by Garth Egbert on the OAuth workflow in .NET. I wouldn't be writing this now if it wasn't for Garth. I also got a lot of help from this post by Jerrie Pelser that works through an example of using OAuth2 to authenticate an ASP.NET Core App with Github.


Getting Started

In this example I'm using a local instance of Canvas running as a Docker container. If you want to follow along then install Docker Desktop. Then download and run lbjay's canvas-docker container. This container is designed for testing LTIs and other integrations locally and comes with default developer keys:

  • developer key: test_developer_key
  • access token: canvas-docker


You can also log in to the Canvas instance and add your own developer keys if you want to.


The other thing that you'll need to get started is an IDE of your choice. I'll be using Visual Studio 2019 Community edition but you could use Visual Studio Code or another tool that you prefer.


Step 1 - Make sure that the test version of Canvas is running

Start Docker Desktop and load the canvas-docker container. Once it has initialised it is available at http://localhost:3000/ 


The admin user/pass login is / canvas-docker.


Step 2 - Create a new ASP.NET MVC Core 2.2 application

Start Visual Studio 2019 and select Create a new project.


Visual Studio Start Screen

Select ASP.NET Core Web Application.

Visual Studio Project type screen

Set the Project name.

Visual Studio Project Name

In this case we're using an MVC application so set the type to Web Application (Model-View-Controller). Make sure that ASP.NET Core 2.2 is selected and use No Authentication as we're going to use Canvas.

Visual Studio project sub type


Step 3 - Let's write some code

 OAuth requires a shared client id and secret that exists in Canvas and can be used by an external app seeking authentication. The canvas-docker container has a developer key already in it but you can add your own. 


The default key credentials are:

Client Id: 10000000000001

Client Secret: test_developer_key


You can get to the developer keys by logging in to your local instance of Canvas and going to Admin > Site Admin > Developer Keys.


Now we need to store these credentials in our web app. For this example we'll put them in the appsettings.json file. You can see the code that we've added in the image below. Please note that in proper development and production instances these credentials should be stored elsewhere. Best practice for doing this is described here: Safe storage of app secrets during development in ASP.NET Core.


app settings json
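In case the image does not render, the added section might look something like the following (the key names here are an assumption; use whatever names your startup code reads, and note these test values are only safe because this is a throwaway local instance):

```json
{
  "Canvas": {
    "ClientId": "10000000000001",
    "ClientSecret": "test_developer_key"
  }
}
```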

In this case Canvas is the name of the authentication scheme that we are using.


Now, the configuration for OAuth2 happens mostly in the startup.cs file. This class runs when the app is first initialised. Within this class is a public void method called ConfigureServices, in which we can add various services to the application through dependency injection. The highlighted zone in the image below shows how to add an authentication service and configure it to use OAuth.


Startup config

The basic process is to use services.AddAuthentication and then set a series of options. Firstly we set the options to make sure the DefaultAuthenticateScheme is set to use cookies and the DefaultSignInScheme is also set to use cookies. We set the DefaultChallengeScheme to use the Canvas settings from the appsettings.json file.


We can chain onto that a call to AddCookie(), and then chain onto that the actual OAuth settings. As you can see, we set "Canvas" as the scheme and then set options. The options for ClientId and ClientSecret are self-explanatory. The CallbackPath option needs to be set to the same value as the Redirect URI in the key settings in Canvas. You may need to edit the settings in Canvas so they match. The image below shows where this is located.


Callback URI


The three endpoints are obviously critical. The AuthorizationEndpoint and the TokenEndpoint are described in the Canvas documentation. The Authorization endpoint is a GET request to login/oauth2/auth. As you can see, there are various parameters that can be passed in, but we don't really need any of these in this case.


The Token endpoint is a POST request to login/oauth2/token. Again, there are various parameters that can be passed in but we don't really need any here.


The UserInformationEndpoint was the hardest endpoint to work out. It is not explicitly mentioned in the documentation. There is a mention in the OAuth overview to setting scope=/auth/userinfo. I couldn't get that to work but I may have been overlooking something simple. In the end it became apparent that we would need an endpoint that returned some user information in JSON format. There is an API call that does just that: /api/v1/users/self 


The AuthorizationEndpoint and the TokenEndpoint are handled automatically by the OAuth service in the web app. The UserInformationEndpoint is called explicitly in the OnCreatingTicket event. But before we get there, we need to make sure that we SaveTokens and map a JSON key to something that we'll eventually get back when we call the UserInformationEndpoint. Here we are mapping the Canvas user id and name.


That brings us on to the events. There are several events that can be coded against, including an OnRemoteFailure event. For simplicity's sake we've just used the OnCreatingTicket event which, as its name suggests, occurs when Canvas has created a ticket and sent it back.


In this event we create a new HttpRequestMessage to call the UserInformationEndpoint with a GET request. We need to add two headers to the request: the first tells the request to expect a JSON object; the second carries the access token that Canvas has sent back to the web app for this user.


All that is left to do is set a response variable to get the user information values back from Canvas, call EnsureSuccessStatusCode to make sure we got a good response, parse the JSON containing the user info, and then run RunClaimActions to allocate the name and id into the web app's authentication.
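Since the screenshots carry most of the code, here is a rough sketch of how the pieces described above fit together (assuming ASP.NET Core 2.x; the callback path, host URL, and configuration keys are placeholder assumptions, not the post's exact values):

```csharp
// Sketch only: usings omitted; Configuration is the injected IConfiguration.
public void ConfigureServices(IServiceCollection services)
{
    services.AddAuthentication(options =>
    {
        options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
        options.DefaultSignInScheme = CookieAuthenticationDefaults.AuthenticationScheme;
        options.DefaultChallengeScheme = "Canvas";
    })
    .AddCookie()
    .AddOAuth("Canvas", options =>
    {
        options.ClientId = Configuration["Canvas:ClientId"];
        options.ClientSecret = Configuration["Canvas:ClientSecret"];
        options.CallbackPath = "/oauth2response"; // must match the key's Redirect URI
        options.AuthorizationEndpoint = "https://canvas.example.edu/login/oauth2/auth";
        options.TokenEndpoint = "https://canvas.example.edu/login/oauth2/token";
        options.UserInformationEndpoint = "https://canvas.example.edu/api/v1/users/self";
        options.SaveTokens = true;
        options.ClaimActions.MapJsonKey(ClaimTypes.NameIdentifier, "id");
        options.ClaimActions.MapJsonKey(ClaimTypes.Name, "name");
        options.Events = new OAuthEvents
        {
            OnCreatingTicket = async context =>
            {
                // Ask Canvas for the current user's profile as JSON
                var request = new HttpRequestMessage(HttpMethod.Get, context.Options.UserInformationEndpoint);
                request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
                request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", context.AccessToken);

                var response = await context.Backchannel.SendAsync(request, context.HttpContext.RequestAborted);
                response.EnsureSuccessStatusCode();

                // Map the id and name from the JSON into claims
                var user = JObject.Parse(await response.Content.ReadAsStringAsync());
                context.RunClaimActions(user);
            }
        };
    });

    services.AddMvc();
}
```

Note the split between the schemes: the cookie holds the signed-in session locally, while the "Canvas" scheme is only invoked when a challenge is issued.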


There is one other thing that we need to do in the Startup.cs class. There is a public void Configure method in which we tell the app to use various tools and resources. Here we need to add app.UseAuthentication() to tell the app to use authentication. This call should come before the app.UseMvc() call.


Use Authentication
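The Configure method then has roughly this shape (a sketch; the surrounding middleware calls are illustrative, not the post's exact code):

```csharp
public void Configure(IApplicationBuilder app)
{
    app.UseStaticFiles();

    // Authentication must be wired up before MVC handles the request
    app.UseAuthentication();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}
```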

So, now the app is set up to use OAuth with Canvas. We just need a situation to invoke it and show the outcome.


To do this we will create a LogIn action in a new Controller. So create a new Controller class in the Controllers folder and call it AccountController.cs. In this controller we will add a LogIn Action.


Account controller


This action will be called when the browser makes a GET request to the Account/Login path. It returns a Challenge response, which effectively kicks off the process of going to Canvas and authenticating that we just configured in Startup.cs.
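The controller itself can be very small. A sketch (where "Canvas" is the scheme name registered in ConfigureServices, and the default redirect target is an assumption):

```csharp
using Microsoft.AspNetCore.Authentication;
using Microsoft.AspNetCore.Mvc;

public class AccountController : Controller
{
    // GET /Account/Login: returns a Challenge, which redirects the
    // browser to Canvas to authenticate via the "Canvas" OAuth scheme
    public IActionResult Login(string returnUrl = "/")
    {
        return Challenge(new AuthenticationProperties { RedirectUri = returnUrl }, "Canvas");
    }
}
```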


To call this Action I've added a link to the Shared/_Layout.cshtml file so that it appears on every page.

Login link

This basically renders as a link to the Login Action of the Account controller.


Now to see whether the user has successfully logged in and what their name is I've modified the Home/Index.cshtml file as follows: 


Index page with log in details

If the user is logged out the page will say "Not logged in". If the user is logged in the page will say "Logged in XXXX" where XXXX is the user's name in Canvas.
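The check in Index.cshtml can be as simple as the following sketch, using the standard identity properties (this markup is an assumption, not the post's exact code):

```cshtml
@if (User.Identity.IsAuthenticated)
{
    <p>Logged in @User.Identity.Name</p>
}
else
{
    <p>Not logged in</p>
}
```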


Step 4 - Test


Now when we run the application we get a plain looking standard web page but it does have a Log in with Canvas link and a statement saying we are not currently logged in.

Testing the integration

When we click the Log In with Canvas link we get sent to the Canvas Log in page (assuming we are not already logged in to Canvas). 


Testing the integration - Canvas login


The user is then asked to agree to authorize the calling web app. Note that the name, icon and other details are all configurable within the associated Canvas Developer key.




The user is then taken back to the web app, having been authenticated.

Completion


Note that in this containerized instance of Canvas the default admin user has '' set as their name, which is why an email address is shown instead. This would normally be their proper name in Canvas.


Summing up

If you are an ASP.NET Core developer looking to use OAuth with Canvas then this will, hopefully, have provided a starting point for getting your own integrations working. It was a bit of a struggle at times, but half of that was returning to ASP.NET after some time away, so there's been a fair bit of relearning as well as quite a bit of new learning. I'm sure there are a heap of improvements that can be made; I'd love to hear suggestions.

Canvas Tips & Tricks


During our migration from D2L to Canvas, we've identified various tips and tricks, and other resources, that may be helpful as you learn how to design and facilitate your courses within the Canvas LMS. This is a living document with new resources being added as the migration continues.

Homepage and Navigation Bar

  • The left navigation bar in your Canvas course can be edited to simplify navigation for students. Check with your ID on which links should be hidden from students. 
  • If an item in the left navigation bar is grayed out, instructors can still access it by clicking on it, but students will not be able to see it.
  • There is a Syllabus button in the left navigation bar in your Canvas course. The online courses are not using the Canvas Syllabus tool; instead they use a syllabus page that can be added to a module and easily edited. Ask your ID how to hide the Canvas Syllabus link in the left navigation bar.


  • Assignments Tool video
  • You can go into your course in Student View and submit an assignment. Then you can leave Student View, see how the assignment looks, and test grading it.


  • When you create announcements, the new announcement will be at the top of the announcements list on the homepage with the older announcements below it.


  • Only instructors should make changes/updates to discussions. If an Instructional Designer or anybody other than the instructor edits a discussion, the edit shows as a post and includes the name and icon of the person who made it.
  • Canvas discussions are arranged in chronological order. If you want the discussions to stay in a particular order, make sure you pin them.
  • By default, students are able to create discussion topics. To disable this, instructors must change the setting in Settings > Course Details > More Options.
  • The default setting for discussions is not threaded. To have a threaded discussion, choose the threaded reply option. If you have any questions, you can contact your instructional designer.

Top 10 Canvas Tips & Tricks


1. Hide unnecessary navigation items.

• The only necessary items are Home, Syllabus, Modules, and Grades

• If you use Announcements you will want to include that as well

• If you want students to be able to see the list of their classmates or self-enroll in groups you’ll want to include the People page


2. Build Chronological Modules

• Modules can be organized per week, topic, or theme

• When you name your module, include a topic keyword or phrase as a subtitle so the students know the topic (e.g. Module 3: Desert Irrigation)

• Make sure to include all course components in a module so students can find them


3. Use Text Headers and Indentation to subdivide modules

• If your modules are long, you may want to consider creating a Page to consolidate items


4. Include a Course Resources module for items that don’t fit in a week/topic

• This could also include a page pointing your students to Canvas instructions


5. Write a brief course introduction and attach your Syllabus

• Once you’ve added the link to your Syllabus file, you can use the “Auto-open inline preview” function to let students see the syllabus without downloading


6. Use headers to organize information anywhere that you enter text

• Get comfortable with the Rich Content Editor, since it is used throughout Canvas

• Keep your design simple and organized


7. Set due dates on all assignments

• Due dates feed into the Syllabus, Calendar, and To-Do List. Including dates is important not only for letting students know when work is due, but also for helping them easily locate their assignments

• You can use the “Undated” area of the Calendar page to identify any assignments that you haven’t given a due date

• Use the “Available Until” date to set a hard deadline. Students will not be able to submit at all after this date has passed


8. Create “On Paper” or “No Submission” assignments for classroom assignments and activities

• These items should still have due dates


9. Embed resources rather than linking outside of Canvas to avoid distractions

• Sites like YouTube and Reddit know what kinds of content will grab your students’ attention. If you send them to those sites they are very likely to get distracted


10. These are generic guidelines. To identify more specific areas where you can improve your course design, contact your Instructional Designer


Optimize File Size

Each course has a limit of 450 MB. Here are some tips to optimize the size of your PowerPoint, Word, and .pdf files, and for using our Box system (unlimited storage) with Canvas.

Here is a handy video on reducing file size for Word, PP, and .pdf files:

Managing course data

These steps will help you stay within your course quota of 450 MB.

If you decide to use Box here is a short instructional video on how it can be quickly done:

Sharing a Box folder in Canvas


Use Box with Canvas
Box is a perfect partner for Canvas when storing and sharing files in a course. You can share an entire folder of items to numerous Canvas courses and make changes/updates in just your Box folder. All references in courses are updated immediately.
Install the Box app on mobile devices for better .pdf viewing.

Helpful resource:


Course Modules: original Canvas 101 for Instructors 

This is a self-paced Canvas Instructor Orientation course designed to familiarize instructors with the basic need-to-know tools and features of Canvas in an effort to prepare them for course design and delivery.



Larson Reever

Assistant Professor of Practice, College of Journalism & Mass Communications

A couple of days ago I decided to re-examine an issue that has annoyed me several times: the lack of a Question Bank API. The process began with some postings to follow up on the question raised by Jared Chapman in


This led to some Tampermonkey scripts that enabled me to add question banks. This success led to a desire for further automation, which led me to investigate the combination of Puppeteer and Node to run JavaScript to create a question bank; the first result was Using Puppeteer to insert new question bank: Chip sandbox. After seeing what Puppeteer could do, I expanded the program so that it would not only let me add question banks but would also output a JSON file of the resulting set of question banks (which could then be used with the Canvas API to insert questions into question banks!). The resulting program is documented at


Some obvious programs that could be derived from this last script would be:

  • a program to read a JSON file or CSV file that contains a list of question banks one wants to create, along with the course_id to create them in
  • a program to simply output the JSON for the existing question banks in a course

Of course, the real solution is to add a Question bank API. Meanwhile, quite a lot can be done despite the lack of such an API.


Once again I wish to thank James Jones and the many others who have provided examples of JavaScript and Puppeteer scripts. I still do not know how to use Puppeteer (well) and lack confidence in navigating DOM structures (especially on pages that have elements lacking IDs or that dynamically modify the JavaScript on the page).

Embulk is an open-source bulk data loader that helps data transfer between various databases, storages, file formats, and cloud services. (Embulk GitHub contributors)


Simply put, Embulk makes it easy (really easy) to import gzipped CSV files into any RDBMS* and to manage the data and workflow necessary for Canvas Data using command-line tools, specifically solving issues we experienced working with Canvas Data without fancier tools.


with support for

Linux, OSX, Windows

MySQL, MS SQL Server, Oracle, PostgreSQL, RedShift


* Embulk goes beyond SQL; see the List of Embulk Plugins by Category


and features useful for Canvas Data

  • Decode gzipped files
  • The ability to intelligently guess the format and data types of CSV files
  • Parallel execution of tasks, multi-threading per CPU core, and a task for each batch file
  • Input CSV Plugin as default Input for Embulk
  • Filter data with Filter Plugins,
  • Output Data to SQL
    • Insert, Insert Direct, Replace, Merge, Truncate and Truncate Insert
    • Timestamp formatting
    • TimeZone conversion from UTC for date time columns
    • before_load and after_load config options to run queries before import (e.g. truncate) and after import (e.g. add indexes)
    • and more


Embulk uses YAML config files for each task; for Canvas Data this means each input source (table files) and its output destination (db table) is one file. This includes differences between staging, test, and production destinations. I imagine your workflow and setup will be different from mine and many others'. You may only need a few tables, or only have one database, or you could simply use Embulk to manage, manipulate, filter, and possibly join CSV files to examine with Tableau, if that's your thing. For this reason, I have only shared each set of config files for MySQL, MSSQL, Oracle, and PostgreSQL. I have not worked with RedShift.
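As a rough sketch (the table, columns, credentials, and paths here are placeholders, not the shared configs themselves), a per-table config looks something like:

```yaml
in:
  type: file
  path_prefix: /canvas-data/files/course_dim   # placeholder path
  decoders:
    - {type: gzip}                             # decode gzipped batch files
  parser:
    type: csv
    delimiter: "\t"                            # Canvas Data files are tab-delimited
    columns:
      - {name: id, type: long}
      - {name: name, type: string}
      - {name: workflow_state, type: string}
out:
  type: mysql                                  # or sqlserver, oracle, postgresql
  host: localhost
  user: embulk
  password: secret
  database: canvas_staging
  table: course_dim
  mode: replace                                # temp-table import; old table dropped on success
  after_load: 'CREATE INDEX idx_course_dim_ws ON course_dim (workflow_state)'
```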


Our old workflow required that we attempt to maintain the newest data from Canvas Data for reporting, attendance, API services and automation, and LTIs. One of our biggest issues was the size of the daily batch without deltas: with the growing use of Canvas within our schools, importing everything can take a long time, and it is slow and unnecessary to hold six years' worth of data for the current semester. We tried different things in SQL and bash to quickly limit production data to the current school year, but never implemented them. LTI queries for attendance and submissions were really slow. Some days the downloaded files were 0 bytes (we must have lost internet), or there were duplicates and a table didn't load, and it would take until 2pm to get everything loaded. Sometimes there were new columns in a table because I forgot to read the release notes, we'd truncated the table before importing, and it took hours to import again. And so on.


Some of these are human, some of these are manageable.


Our new workflow uses Embulk

  1. Download with the Canvas Data CLI, some of which is documented here
  2. Import all CD tables (CSV in, SQL out) to the staging environment with Replace mode. This creates temporary tables for the import; if it fails, the previous version is still intact. After a successful import, Embulk drops the old table and runs the after_load queries, which I use for enumerable constraints and indexes. I left a lot of examples in the configs.

    The Requests table config uses Insert mode to append the new rows.
  3. I use staging for Tableau reporting. For production, I only need to load the tables necessary for our LTIs and API services. Some of these configs are straight copies of the staging imports, except they point to production. Some of the configs create new tables (SQL in, SQL out), importing filtered or composite tables from query results.

    Here's an example:


Using RHEL7, 6 CPUs with 12 cores, and 16GB RAM, Embulk imports 7.9GB of CSVs into >1TB of SQL (no requests) in less than 4.5 hours, depending on which indexes you keep in the configs.


GitHub - ccsd/canvas-data-embulk-configs: YAML configs for importing Canvas Data with Embulk


Poh Duong

Ruby GEM for REST calls

Posted by Poh Duong Jul 3, 2019

The University of Adelaide has built a REST Client GEM as part of one of our integration projects.


The GEM has now been open sourced; the source code can be found at


The GEM's main features are:

  • Retries for API calls
  • Ability to set authentication types and automatically populate the request with auth parameters
  • Re-auth for OAuth
  • Getting all data for paginated endpoints (only if the API implements pagination with header links)


The GEM is built as a wrapper around the rest-client GEM.
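For the pagination feature above: Canvas-style APIs advertise the next page in the Link response header, so getting all data amounts to following rel="next" until it disappears. A minimal parser sketch (illustrative only, not the GEM's actual code):

```ruby
# Parse an RFC 5988 Link header into a { rel => url } hash (sketch)
def parse_link_header(header)
  header.to_s.split(',').each_with_object({}) do |part, links|
    url, rel = part.match(/<([^>]+)>;\s*rel="([^"]+)"/)&.captures
    links[rel] = url if url
  end
end

links = parse_link_header(
  '<https://canvas.example.edu/api/v1/courses?page=1>; rel="current", ' \
  '<https://canvas.example.edu/api/v1/courses?page=2>; rel="next"'
)
puts links['next']   # the URL to request for the next page
```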


Canvas Data provides a wealth of information that can be used in many interesting ways, but there are a few hurdles that can make it hard to even get started:

  • The Canvas Data API uses a different authentication mechanism than the one that you're probably already used to using with the Canvas API.
  • Data is provided in compressed tab-delimited files. To be useful, they typically need to be loaded into some kind of database.
  • For larger tables, data is split across multiple files, each of which must be downloaded and loaded into your database.
  • The size of the data can be unwieldy for use with locally-running tools such as Excel.
  • Maintaining a local database and keeping up with schema changes can be tedious.


This tutorial will show you how to build a data warehouse for your Canvas Data using Amazon Web Services. Besides solving all of the problems above, this cloud-based solution has several advantages compared to maintaining a local database:

  • You can easily keep your warehouse up to date by scheduling the synchronization process to run daily.
  • You can store large amounts of data very cheaply.
  • There are no machines to maintain, operating systems to patch, or software to upgrade.
  • You can easily share your Canvas Data Warehouse with colleagues and collaborators.


Before we get started

Before you begin this tutorial, you should make sure that you've got an AWS account available, and that you have administrator access. You'll also need an API key and secret for the Canvas Data API.


Experience with relational databases and writing SQL will be necessary in order to query your data. Experience with AWS and the AWS console will be helpful.


Be aware that you'll be creating AWS resources in your account that are not free, but the total cost to run this data warehouse should be under about $10/month (based on my current snapshot size of 380GB). There's also a cost associated with the queries that you run against this data, but typical queries will only cost pennies.


All of the code used in this tutorial can be found in GitHub:
GitHub - Harvard-University-iCommons/canvas-data-aws: Build a Canvas Data warehouse on AWS 


AWS services we'll use

We'll use several different AWS services to build the warehouse:

  • S3: we'll store all of the raw data files in an S3 bucket. Since S3 buckets are unlimited in size and extremely durable, we won't need to worry about running out of space or having a hard drive fail.
  • Lambda: we'll use serverless Lambda functions to synchronize files to the S3 bucket. Since we can launch hundreds or even thousands of Lambda functions in parallel, downloading all of our files is very fast.
  • SNS: we'll use the Simple Notification Service to let us know when the synchronization process runs.
  • Glue: we'll create a data catalog that describes the contents of our raw files. This creates a "virtual database" of tables and columns.
  • Athena: we'll use this analytics tool along with the Glue data catalog to query the data files directly without having to load them into a database first
  • CloudFormation: we'll use AWS' infrastructure automation service to set up all of the pieces above in a few easy steps!


Let's build a warehouse!

  1. Log into the AWS console and access the CloudFormation service.
  2. Click on the Create Stack button
  3. On the next screen, leave the Template is ready and Amazon S3 URL options selected. Below, Enter this S3 URL:
    Click Next. 
  4. On the stack details screen, first enter a name for this stack. Something like "canvas-data-warehouse" is fine. Enter your Canvas Data API key and secret in the fields provided. Enter your email address (so that you can receive updates when the synchronization process runs). You can leave the default values for the other parameters. Click Next.
  5. On the stack options screen, leave all of the default values and click Next.
  6. On the review screen, scroll to the bottom and check the box to acknowledge that the template will create IAM resources (roles, in this case). Click the Create stack button, and watch as the process begins!


It'll take several minutes for all of the resources defined in the CloudFormation template to be created. You can follow the progress on the Events tab. Once the stack is complete, check your email -- you should have received a message from SNS asking you to confirm your subscription. Click on the link in the email and you'll be all set to receive updates from the data-synchronization process.


Now we're ready to load some data!


Loading data into the warehouse

Instructure's documentation for the Canvas Data API describes an algorithm for maintaining a snapshot of your current data:

  1. Make a request to the "sync" API endpoint, and for every file returned:
    • If the filename has been downloaded previously, do not download it
    • If the filename has not yet been downloaded, download it
  2. After all files have been processed:
    • Delete any local file that isn't in the list of files from the API
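The algorithm above can be sketched in a few lines of Python (illustrative only; the actual Lambda implementation in the repo works against S3 objects and dump metadata):

```python
def plan_sync(api_files, local_files):
    """Given the file list from the sync endpoint and the files already
    downloaded, decide what to fetch and what to delete (per the algorithm above)."""
    local_set = set(local_files)
    api_set = set(api_files)
    # Download only files we haven't seen before
    to_download = [f for f in api_files if f not in local_set]
    # Delete local files no longer present in the API's list
    to_delete = [f for f in local_files if f not in api_set]
    return to_download, to_delete

# Example: one new file to fetch, one stale file to remove
dl, rm = plan_sync(["a.gz", "b.gz"], ["a.gz", "old.gz"])
print(dl, rm)  # ['b.gz'] ['old.gz']
```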


The CloudFormation stack that you just created includes an implementation of this algorithm using Lambda functions. A scheduled job will run the synchronization process every day at 10am UTC, but right now we don't want to wait -- let's manually kick off the synchronization process and watch the initial set of data get loaded into our warehouse.


To do that, we just need to manually invoke the sync-canvas-data-files function. Back in the AWS console, access the Lambda service. You'll see the two functions that are used by our warehouse listed -- click on the sync-canvas-data-files function.


On this screen you can see the details about the Lambda function. We can use the AWS Lambda Console's test feature to invoke the function. Click on the Configure test events button, enter a name for your test event (like "manual"), and click Create. Now click on the Test button, and your Lambda function will be executed. The console will show an indication that the function is running, and when it's complete you'll see the results. You'll also receive the results in your email box.


Querying your data

When the Lambda function above ran, in addition to downloading all of the raw data files, it created tables in our Glue data catalog making them queryable in AWS Athena. In the AWS console, navigate to the Athena service.  You should see something similar to the screenshot below:



You can now write SQL to query your data just as if it had been loaded into a relational database. You'll need to understand the schema, and Instructure provides documentation explaining what each table contains:


Some example queries:

  • Get the number of courses in each workflow state:
    SELECT workflow_state, count(*) FROM course_dim GROUP BY workflow_state;
  • Get the average number of published assignments per course in your active courses:
    SELECT AVG(assignments) FROM (
        SELECT c.id, COUNT(*) AS assignments
        FROM course_dim c, assignment_dim a
        WHERE c.id = a.course_id
        AND c.workflow_state = 'available'
        AND a.workflow_state = 'published'
        GROUP BY c.id
    ) t;


Cleaning up

If you don't want to keep your data warehouse, cleaning up is easy: just delete the "raw_files" folder from your S3 bucket, and then delete the stack in the CloudFormation console. All of the resources that were created will be removed, and you'll incur no further costs. 


Good luck, and please let me know if you run into any trouble with any of the steps above!

New FERPA requirements for cross-listed courses! and others have commented on the problems of cross-listing. However, the ability to do cross-listing is controlled by the same permission as being able to create/edit/delete sections. I find this to be odd. I think that the ability to cross-list should not be tied to the ability to use sections within a course.


The app/controllers/sections_controller.rb file has the following code (the relevant line is the authorized_action check):

  # @API Cross-list a Section
  # Move the Section to another course.  The new course may be in a different account (department),
  # but must belong to the same root account (institution).
  # @returns Section
  def crosslist
    @new_course = api_find(@section.root_account.all_courses.not_deleted, params[:new_course_id])
    if authorized_action(@section, @current_user, :update) && authorized_action(@new_course, @current_user, :manage)
      @section.crosslist_to_course(@new_course, updating_user: @current_user)
      respond_to do |format|
        flash[:notice] = t('section_crosslisted', "Section successfully cross-listed!")
        format.html { redirect_to named_context_url(@new_course, :context_section_url, @section.id) }
        format.json { render :json => (api_request? ? section_json(@section, @current_user, session, []) : @section) }
      end
    end
  end
The check for authorized actions means that:

  • Unless the current user has the ability to update sections, the cross listing will not occur.
  • Unless the current user can manage the target course, the cross-listing will not occur.

Unfortunately, cross-listing provides a sneak path to add students to a course: if a teacher without administrative rights, but with the ability to create sections, does a cross-listing, the students will be added to the target course, despite the fact that the teacher does not have rights to add students to that course.


Moreover, if the current user can manually add students to the target course, then they can always add each of the students from a section to the target course. This means that there needs to be a check in the above code on whether the current user can enroll students in the target course. 



Therefore, the second test should be something similar to:

authorized_action(@new_course, @current_user, :manage)  && authorized_action(@new_course, @current_user, :manage_students) 


Where (based on looking at ./app/models/role_override.rb and spec/apis/v1/enrollments_api_spec.rb) :manage_students would enable the creation of student enrollments. Thus unless the user is allowed to enroll students in the target course, the cross-listing would not occur.


If the permission to add students (permission: Users - add / remove students in courses) in imported courses (i.e., courses automatically created by SIS imports) is disabled for the "Teacher" role, there should not be a problem in allowing teachers to create/edit/delete sections while still meeting FERPA and other similar regulations (as there would not be any ability to cross-list a section's worth of students, and each section would only be within a single course). In fact, sections could be used to reduce the visibility and interactions of students (and even teachers) to within their section, thus advancing students' privacy.

David Lyons

Module Filters

Posted by David Lyons Employee Jun 11, 2019

Disclaimer: This is a personal project, and is not endorsed by Instructure or Canvas LMS. Custom JavaScript must be maintained by the institution.

Most (great) product features come from a deep understanding of customers’ problems. It’s tempting to build every “good” or “obvious” feature someone can describe passionately, but that leads to thoughtless bloat that breaks the UX. And most things people describe as “obvious” actually have 10,000 questions between the comment and a well-researched, tested feature.

Sometimes the stars align and a conversation with an insightful person includes an offhanded “wouldn’t it be neat” comment that’s small enough to quickly prototype and test. And those are just the circumstances that led to this experiment: Module Filters.

Behold! Content filters!

The comment, which was part of a much larger conversation on organization and navigation, was

“Wouldn’t it be neat if you could filter by content type right on the Modules page in Canvas?”

and I agreed. Because Canvas supports custom JavaScript I was able to quickly mockup a functioning prototype for all-important testing and validation.

This project was a good candidate for me to experiment with because it’s

  1. small in scope
  2. technically possible
  3. UI/UX not immediately obvious

Small in Scope

Small changes a person/team can wrap their hands all the way around are ideal for quality, and for ensuring the change actually addresses the problem. Feature creep is very real though, and I had to repeatedly slap my own hand and say “No! That’s not part of what is being tested here!” Keeping things in scope is tough in the face of the endless waterfall of “wouldn’t it be neat if it also…”

Technically Possible

What I mean by technically possible is that 1. the idea is literally possible at all and 2. within my ability to develop. JavaScript is great for uses exactly like this and Canvas allows for this kind of thing, and while the scope of the idea is small, if I knew nothing about HTML/CSS/JavaScript and had to learn all of that first the overall project would have been a somewhat larger commitment.


UI/UX Not Immediately Obvious

This is where the bulk of the work (and my excitement for the idea) went. “Filters” in apps don’t have a universal UI: sometimes they’re checkboxes, or a dropdown menu, or toggles, or happen automatically while typing, etc. None of those is right or wrong; which direction one leans depends on the situation. My first version actually used unstyled checkboxes with labels (which looked awful) just to make sure my code worked. Thinking about the UI/UX also helped me with feature creep: checkboxes work well for a filter like content type because a user might want any number of filter combinations on/off, but they wouldn’t work well to toggle a single binary state like “has due date”, for example. One might even want different types of filter simultaneously, which requires a lot of additional considerations.

Ultimately I settled on an on/off toggle using the corresponding content icon instead of a checkbox with a label to support any combination of content types to be shown/hidden, and to avoid adding text to the app UI. Keeping the filters to just content type made the UI more approachable and let me focus on the UX of how it might be to actually use this feature.

Try It and Tell Me What You Think

I put the code on GitHub with an MIT license. If you play with it, I’d love to hear your thoughts either on the repo or on Twitter.

In the 2019-04-20 release notes, one of the bug fixes was: “The Copy a Canvas Course option uses active term dates to display available courses in the drop-down list.” Recently, it came to our attention in Emerson College’s Instructional Technology Group that this bug fix had the side effect of removing past courses from this drop-down list unless the “Include completed courses” box is checked.


The list of courses in the "Select a course" dropdown for copying a Canvas course now changes when "Include completed courses" is checked.

Since we’d all gotten used to past courses appearing in the list whether or not this box was checked, the change caused us to assume that the drop-down was broken. Based on the comments by Christopher Casey, Rick Murch-Shafer, Chris Hofer, and Joni Miller in the release notes thread, we aren't the only ones who ignored this checkbox until now.


Almost all of the time, when our faculty use the course copy tool, it’s to copy from a past semester to the current one. To prevent confusion due to the new functionality, we decided to force the “Include completed courses” box to be checked by default.


Demonstration that choosing "Copy a Canvas Course" now results in the "Include completed courses" checkbox being checked by default.


Here’s the code I used to make this happen. I’m happy to help others get this working in their custom js files too!


Edited to add: Check the comments for more efficient and concise code for this. I'm leaving the original version here for the thought process breakdown.


I started by writing a helper function to do the actual work of checking the box:


/**
 * Check the "Include completed courses" box on course import screen.
 * NOTE: If the checkbox ID changes in future versions of Canvas, this
 * code will need to be adjusted as well.
 */
function checkCompletedCourses() {
  var completedBox = document.getElementById("include_completed_courses");

  if ((typeof completedBox !== 'undefined') && (completedBox !== null)) {
    // Set the checkbox value
    completedBox.checked = true;
    // Trigger the change event as if the box had been clicked by the user
    completedBox.dispatchEvent(new Event("change", { bubbles: true }));
  }
}

Inside the document ready function in our custom js code file, I already had a variable for running code only on specific pages. I added an additional regular expression to check for the Import Content page in a course.


var currentCoursePath = window.location.pathname;
var importPattern = /(\/courses\/[0-9]+\/content_migrations)$/i;


Since the “Include completed courses” checkbox doesn’t exist until the “Copy a Canvas Course” option is selected, I set up a MutationObserver to monitor the div that this checkbox gets added to.


if (importPattern.test(currentCoursePath)) {
  var importBoxObserver = new MutationObserver(function(mutations) {
    mutations.forEach(function(mutation) {
      // When the "Copy a Canvas Course" form is added, check the box
      checkCompletedCourses();
    });
  });

  importBoxObserver.observe(document.getElementById("converter"), {
    childList: true
  });
}

So far this is working for us and we’re hoping it’ll prevent extra pre-semester stress once faculty are back on campus for the Fall.

I'm trying to make standards-based grading more approachable for my teachers. When I was teaching full time, I held to Frank Noschese's Keep It Simple philosophy. Single standards correlate to single assignments that are scored as pass/fail. Now, I averaged these out on a weighted scale to calculate a 0-100 grade, but that's for another post.


Using Canvas, I was able to set up a functional reassessment strategy to aggregate demonstrations of proficiency.

The Learning Mastery Gradebook in Canvas does not translate anything into the traditional gradebook. This meant that every week or so, I would have to open the Mastery report alongside the traditional gradebook and update scores line by line. This was tedious and prone to error.


Using the Canvas API and a MySQL database, I put together a Python web app to do that work for me. The idea is that a single outcome in a Canvas course is linked with a single assignment to be scored as a 1 or 0 (pass/fail) when a mastery threshold is reached.


The App

Users are logged in via their existing Canvas accounts using the OAuth flow. There they are shown a list of active courses along with the number of students and how many Essential Standards are currently being assessed (i.e., linked to an assignment).
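For anyone curious how that login works under the hood, the sketch below shows the two steps of Canvas's standard OAuth2 web flow that an app like this relies on. This is not the app's actual code: the instance URL, client id, redirect URI, and helper names are all placeholders of mine.

```python
from urllib.parse import urlencode

CANVAS_BASE = "https://canvas.example.edu"   # hypothetical Canvas instance

def authorization_url(client_id, redirect_uri, state):
    """Step 1: send the user to Canvas's authorization endpoint."""
    params = {
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,
        "state": state,
    }
    return CANVAS_BASE + "/login/oauth2/auth?" + urlencode(params)

def exchange_code_for_token(session, client_id, client_secret, code, redirect_uri):
    """Step 2: trade the code Canvas sent back for an access token.

    `session` is any object with a requests-style .post() method."""
    resp = session.post(CANVAS_BASE + "/login/oauth2/token", data={
        "grant_type": "authorization_code",
        "client_id": client_id,
        "client_secret": client_secret,
        "code": code,
        "redirect_uri": redirect_uri,
    })
    return resp.json()["access_token"]
```

Canvas redirects back to the `redirect_uri` with `?code=...&state=...`, and the token from step 2 is then used for the API calls made on the user's behalf.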


Teacher Dashboard

The teacher dashboard



Single Course

In the Course view, users select which grading category will be used for the standards. Outcomes are pulled in from the course and stored via their ID number. Assignments from the selected group are imported and added to the dropdown menu for each Outcome.


Users align Outcomes to the Assignment they want to be updated in Canvas when the scores are reconciled. This pulls live from Canvas, so the Outcomes and Assignments must exist prior to importing. As Assignments are aligned, they're added to the score report table.


Score Reports

Right now, it defaults to a 1 or 0 (pass/fail) if the Outcome score is greater than or equal to 3 (out of 4). All of the grade data is pulled at runtime - no student information is ever stored in the database. The Outcome/Assignment relationship that was created tells the app which assignment to update for which Outcome.

When scores are updated, the entire table is processed. The app pulls data via the API and compares the Outcome score with the Assignment grade. If an Outcome has risen above a 3, the associated Assignment is toggled to a 1. The same is true for the inverse: if an Outcome falls below a 3, the Assignment is toggled back to a 0.
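The toggle-in-both-directions rule above is simple enough to sketch. This is not the app's actual code (the function names and shapes are mine), but it captures the described logic: an Outcome score at or above the threshold maps to 1, below it maps to 0, and only grades that actually change need to be written back.

```python
# Hypothetical sketch of the outcome -> assignment reconciliation rule.
# A threshold of 3 (out of 4) matches the behavior described above.

def reconcile_score(outcome_score, threshold=3.0):
    """Return the pass/fail grade (1 or 0) the assignment should hold."""
    if outcome_score is None:
        return None            # no outcome data: leave the grade alone
    return 1 if outcome_score >= threshold else 0

def grades_to_update(pairs, threshold=3.0):
    """pairs: iterable of (student_id, outcome_score, current_grade).

    Yields (student_id, new_grade) only where the grade must change,
    so unchanged submissions are never rewritten via the API."""
    for student_id, outcome_score, current_grade in pairs:
        new_grade = reconcile_score(outcome_score, threshold)
        if new_grade is not None and new_grade != current_grade:
            yield (student_id, new_grade)
```

In the real app, each changed grade would become a PUT to the Canvas submissions endpoint with `submission[posted_grade]` set to 1 or 0.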


I have mixed feelings about dropping a score, but the purpose of this little experiment is to make grade calculations and reconciliation between Outcomes and Assignments much smoother for the teacher. It requires a user to run it (there are no automatic updates), so grades can always be updated manually by the teacher in Canvas. Associations can also be removed at any time.



To speed up processing, I use a Pool to run multiple checks at a time. It can process a class of ~30 students in under 10 seconds. I need to add some caching to make that even faster. This does not split students into sections, either. 
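A thread-based Pool fits here because each per-student check spends most of its time waiting on the Canvas API rather than computing. A minimal sketch, with a stand-in check function in place of the app's real API calls:

```python
from multiprocessing.dummy import Pool  # thread-based Pool, good for I/O-bound API calls

def check_student(student_id):
    # Stand-in for the real per-student work: the app would pull the
    # outcome result and submission grade from Canvas here and compare them.
    return (student_id, student_id % 2)   # fake (id, pass/fail) result

student_ids = list(range(30))

# Run up to 10 checks concurrently instead of one at a time.
with Pool(10) as pool:
    results = pool.map(check_student, student_ids)
```

`pool.map` preserves input order, so the results line up with the student list even though the checks finish out of order.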


I've started turning this into an LTI capable app which would make it even easier for teachers to jump in. If you're a Python developer, I would really appreciate some code review. There is definitely some cleanup to be done in the functions and documentation and any insight on the logic would be great.


The source for the project is on GitHub.

During 2019 I have been trying to use Canvas to help support the degree project process (for students, faculty, and administrators). One of the latest parts of this effort has been to look at some of the administrative decisions and actions that occur at the start of the process. A document about this can be found in the attached PDF. The code can be found in SinatraTest21.rb. This code makes use of user custom data in conjunction with a dynamic survey (realized via an external LTI tool), while the administrative decision-and-action part of the process utilizes custom columns in the gradebook, automatically creating sections and adding each student to the relevant section.


The Ruby code in the LTI tool uses a token to access the Canvas API and put values into the custom columns in the gradebook. This is probably not the best approach, but it worked for the purposes of this prototype.
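For comparison, here is roughly what that gradebook write looks like against the Custom Gradebook Columns API, sketched in Python rather than the original Ruby. The instance URL, ids, token, and content value are all placeholders.

```python
import json
from urllib.request import Request

CANVAS_BASE = "https://canvas.example.edu"   # hypothetical Canvas instance

def column_data_request(course_id, column_id, user_id, content, token):
    """Build the PUT request for the Custom Gradebook Columns data endpoint:
    PUT /api/v1/courses/:course_id/custom_gradebook_columns/:id/data/:user_id"""
    url = (f"{CANVAS_BASE}/api/v1/courses/{course_id}"
           f"/custom_gradebook_columns/{column_id}/data/{user_id}")
    body = json.dumps({"column_data": {"content": content}}).encode()
    return Request(url, data=body, method="PUT", headers={
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    })

# Placeholder ids and token; urllib.request.urlopen(req) would send it.
req = column_data_request(101, 7, 4242, "Supervisor assigned", "<token>")
```

Embedding a long-lived token this way has the same drawback noted above; a production tool would obtain a token per user instead.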

James Jones suggested that I write a blog post about my recent findings and results.


I just recently started with Canvas because Uppsala University has decided to use it as its upcoming LMS platform after a failed attempt with another product. I had already spent some time with Blackboard and was quite fond of the calculated question type in its quizzes. I quickly found out that Canvas offers essentially the same functionality, but a bit less comfortably.



A calculated question, or Formula Question as it is called in the Canvas interface, is based on a table of pre-generated variable values and corresponding results. In the general case the variables are defined and the target function is entered using the web interface; Canvas then calculates random values for the variables and the resulting answer values. However, as the designer you have no way to influence the variable values afterwards (unlike in Blackboard, where you have a spreadsheet-like interface). Also, in Canvas, the equation cannot be altered once it has been entered, and the supported syntax is not very convenient for more complex problems.
I was also missing the ability to give a relative tolerance for the correct answers in a question; however, I found out that appending a percent sign to the answer tolerance gives exactly this behavior, even though it does not seem to be documented anywhere.


Solution or problems?

My hope was then for the API, since it seemed to support the creation of questions. But even though there is a Python library for controlling Canvas, many of its functions are not very well documented. My first tries failed miserably, but finally I was on the right track.


The cause of my problems was that the Canvas API uses different field identifiers and structures when creating a calculated question than when retrieving the contents of an already existing question, which is of course what I did in my attempts to reverse-engineer the interface.


Working solution

Here is now an example of a working solution that gives you full control over the generation of Formula Questions using Python and the canvasapi library. The example is written in Python 3 and creates a question from the field of electronics: the voltage in a voltage divider. The script defines the variables and fills them with random numbers from sets of predefined, commonly used values. I wrote the script more for readability than for any Pythonic optimization.

from canvasapi import Canvas
import itertools
import random

API_URL = ""
API_KEY = '<your api key here>'

canvas = Canvas(API_URL, API_KEY)

# create a calculated_question
# example of a potential divider
#  U2 = U0 * R2 / ( R1 + R2 )

E3  = [1, 2, 5]
E6  = [1.0, 1.5, 2.2, 3.3, 4.7, 6.8]
E12 = [1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8, 8.2]

coursename = 'test'
quizname   = 'test'

# define the input variable names
#   each variable has its own range, format and scale
variables = \
    [
        {
            'name':   'U0',
            'unit':   'V',
            'format': '{:.1f}',
            'scale':  '1',
            'range':  [1.2, 1.5, 4.5, 9, 12, 24, 48, 110, 220]
        },
        {
            'name':   'R1',
            'unit':   'ohm',
            'format': '{:.1f}',
            'scale':  '1',
            'range':  [ i*j for i, j in itertools.product([10, 100, 1000], E12)]
        },
        {
            'name':   'R2',
            'unit':   'ohm',
            'format': '{:.1f}',
            'scale':  '1',
            'range':  [ i*j for i, j in itertools.product([10, 100, 1000], E12)]
        }
    ]

# how many sets of answers
rows = 30

# create an empty list of lists (array) for the values
values = [ [ i for i in range(len(variables))] for _ in range(rows)]

# create an empty list for the calculated results
results = [i for i in range(rows)]

# fill the array of input values with random choices from the given ranges
for i in range(rows):
    for j in range(len(variables)):
        values[i][j] = random.choice(variables[j].get('range'))

    # and calculate the result value   
    results[i] = values[i][0] * values[i][2] / (values[i][1]+values[i][2])

# format the text field for the question
#   an HTML table is created which presents the variables and their values
question_text = '<p><table border="1"><tr><th></th><th>value</th><th>unit</th></tr>'
for j in range(len(variables)):
    question_text += '<tr>'
    question_text += '<td style="text-align:center;">' + variables[j].get('name') + '</td>'
    question_text += '<td style="text-align:right;">[' + variables[j].get('name') + ']</td>'
    question_text += '<td style="text-align:center;">' + variables[j].get('unit') + '</td>'
    question_text += '</tr>'
question_text += '</table></p>'

# format the central block of values and results
answers = []
for i in range(rows):
    answers.append(
        {
          'weight': '100',
          'variables': [
            {
              'name': variables[j].get('name'),
              'value': variables[j].get('format').format(values[i][j])
            } for j in range(len(variables))
          ],
          'answer_text': '{:.5g}'.format(results[i])
        })

# format the block of variables,
#   'min' and 'max' do not matter since the values are created inside the script
#   'scale' determines the decimal places during output 
variables_block = []
for j in range(len(variables)):
    variables_block.append(
        {
          'name':  variables[j].get('name'),
          'min':   '1.0',
          'max':   '10.0',
          'scale': variables[j].get('scale')
        })

# put together the structure of the question
new_question = \
    {
      'question_name':           'Question 6',
      'question_type':           'calculated_question',
      'question_text':           question_text,
      'points_possible':         '1.0',
      'correct_comments':        '',
      'incorrect_comments':      '',
      'neutral_comments':        '',
      'correct_comments_html':   '',
      'incorrect_comments_html': '',
      'neutral_comments_html':   '',
      'answers':                 answers,
      'variables':               variables_block,
      'formulas':                ['automated by python'],
      'answer_tolerance':        '5%',
      'formula_decimal_places':  '1',
      'matches':                 None,
      'matching_answer_incorrect_matches': None,
    }

courses  = canvas.get_courses()
for course in courses:
    if course.name.lower() == coursename.lower():
        print('found course')
        quizzes = course.get_quizzes()
        for quiz in quizzes:
            if quiz.title.lower() == quizname.lower():
                print('found quiz')

                question = quiz.create_question(question = new_question)      

Since this is mostly the result of successful reverse engineering rather than a reading of the actual Canvas source code, the above example should be used with care, but it is what I needed to create usable questions for my students. Perhaps it can also serve the developers as an example of how the interface for calculated questions could be improved in the future.


How does it work?

The list variables contains one dictionary per variable with its name and range, as well as formatting instructions; the ranges are given as lists. The first loop draws the random values and calculates the results from them. The next block creates a rudimentary HTML table, included in the question text, that presents the variables together with their physical units for this particular question. The two loops that follow assemble the answer and variable blocks, and the new_question dictionary finally puts everything together.

The script then inserts the question into an existing quiz in an existing course via quiz.create_question().


After running the script

This screenshot shows the inserted question after running the script, obviously this would need some more cosmetics.

inserted question inside the quiz after executing the script

And when editing the question this is what you see:

editing the question

Be careful not to touch the variables or the formula section since this will reset the table values.



In order to be presentable to the students, the above question needs some cosmetics. What is to be calculated? Perhaps insert a picture or an equation? More text?

after editing, but still inside the editor

After updating the question and leaving the editor it now looks like this in the Canvas UI:

the modified question inside the quiz


Seeing and answering the question

When you now start the quiz, this is how the question looks:

the question as it is seen by the student


  • calculated_questions can be generated using the Python canvasapi library
  • answer values have to be provided with the key 'answer_text'
    'answers': [
        {'weight': '100',
         'variables': [
           {'name': 'U0', 'value': '9.0'},
           {'name': 'R1', 'value': '5600.0'},
           {'name': 'R2', 'value': '5600.0'}],
         'answer_text': '4.5'},


  • when querying an existing calculated_question through the API the answer values are found with the key 'answer'
        {'weight': 100,
         'variables': [
          {'name': 'U0', 'value': '110.0'},
          {'name': 'R1', 'value': '82.0'},
          {'name': 'R2', 'value': '8200.0'}],
         'answer': 108.91,
         'id': 3863},


  • when supplying an equation for the 'formulas' field, this has to be done in a list, not a dictionary
     'formulas':  ['a*b'],


  • when querying an existing calculated_question through the API the equations are found in a dictionary like this:
     formulas=[{'formula': 'a*b'}],