At some point in your experience with Canvas, you, your colleagues, or your students may have seen an announcement in Canvas with an invitation to participate in a survey. If you followed the link you were asked to answer the following question: “How likely is it that you would recommend Canvas to a friend or colleague?” If you’ve answered that question, you’ve given feedback to Product Management through one of our many input channels. If you’ve dismissed this announcement, we hope the following added detail will encourage you to participate in the future!
How we administer the survey
Every 30 days, we randomly invite 1 out of every 12 instructors/admins and 1 out of every 24 students. Participation is voluntary, and each person receives at most one invitation per calendar year.
The survey consists of seven questions:
- 2 demographic questions: age screener and user role
- 2 overall Canvas satisfaction questions: one scale and one text entry
- 1 - 3 product-specific questions: one randomly selected user goal with an importance scale; depending on the rating, a satisfaction scale and text-entry follow-up question may appear.
What we look for in the survey
- Direct unfiltered feedback: We want to hear directly from instructors, admins, and students in Canvas.
- Feature and functionality feedback: We seek to understand the importance of and satisfaction with the features and functionalities in Canvas.
- Product development insights: The feedback collected is joined with other channels, such as feature ideas, support tickets, and Customer Success Manager cases, to inform decision-making.
Why this survey matters to all of us
We want Canvas to be more than just the LMS at your school; we want it to be a product that you love to use. The feedback in this survey is a direct channel from Canvas users to Canvas creators and is one of the most effective ways to gather broad, diverse input. As Canvas improvements are evaluated and prioritized, this feedback carries significant weight in the decision-making process.
We are listening to all of our feedback channels, and we appreciate every user who takes the time to share through them. Thank you for your interest in learning more about this particular channel. For those of you who want to dive even deeper, please keep reading below. We look forward to your participation!
- Delivery: Canvas global announcement with a link to a Qualtrics hosted survey
- Announcement Text:
Canvas wants to know what you think.
At Canvas, our goal is to be the best LMS available. To do this, we need your help. Please take this one-minute survey and tell us what’s most important to you.
- Group Selection: randomly selected from Admins, Instructors, and Students in Canvas
- Question Structure:
- Seven questions: 2 multiple choice, 2 - 3 scale, 1 - 2 text entry
- Product questions include a randomizer, so not every survey participant will see the same choices
- Estimated time: 1 minute
Explore the full survey experience by following this link to a replica survey: https://Instructure.qualtrics.com/jfe/form/SV_diGXZhmYkPu08JL
The Net Promoter Score (NPS) question, “How likely is it that you would recommend Canvas to a friend or colleague?” is the first question seen beyond the demographic questions. The NPS is an industry standard metric, which uses a scale of 0 (not at all likely) to 10 (extremely likely) where respondents are grouped as follows:
- 9s and 10s are Promoters—the most enthusiastic users; extremely likely to recommend Canvas.
- 7s and 8s are Passives—or passively satisfied users.
- 0s through 6s are Detractors—dissatisfied users; unlikely to recommend Canvas.
Subtracting the percentage of Detractors from the percentage of Promoters yields a score from -100 to 100; the lowest score would mean every respondent rated 0 through 6, and the highest would mean every respondent rated 9 or 10. This calculation allows us to compare your current satisfaction to past satisfaction. It also lets us know how well we are doing compared to other industry leaders.
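The calculation above is straightforward to express in code. A minimal sketch in Python, using made-up example ratings (the function name and sample data are illustrative, not part of the actual survey pipeline):

```python
def nps(ratings):
    """Compute a Net Promoter Score from a list of 0-10 ratings.

    Promoters rate 9-10, Passives 7-8, Detractors 0-6.
    Returns a score between -100 and 100.
    """
    total = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    # Percentage of Promoters minus percentage of Detractors
    return 100 * (promoters - detractors) / total

# Hypothetical sample: 5 Promoters, 3 Passives, 2 Detractors
print(nps([10, 9, 9, 10, 9, 8, 7, 7, 3, 5]))  # -> 30.0
```

Note that Passives drop out of the numerator but still count in the total, which is why a large passively satisfied population pulls the score toward zero.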
For a more detailed view of user satisfaction, we conduct an analysis based on a modification of the Kano Method. We calculate the average difference between the importance of each user outcome and user satisfaction with Canvas' functionality. This approach helps us identify the user outcomes with the largest differences, i.e., the areas in Canvas where we are not meeting your expectations. Then, on a biannual cadence, we read through the outcome-specific comments to plan and prioritize improvements.
The free-form responses are also analyzed in a group exercise where 20+ employees sift through a random selection of hundreds of responses using the KJ method. The results help us understand the outcomes users want from Canvas.
To summarize, we’re listening!