[Account Settings] Block AI Agents from Logging in to Canvas

Problem statement:

It's shockingly simple to log into Canvas and then have a browser agent like Comet or ChatGPT's new browser do all the work inside Canvas without the student looking at a single page. In essence, Canvas is not supporting the integrity of the learning process. Addressing this should be a top priority, if not the number one priority, for a company whose entire business rests on the premise that it helps teachers and students learn. This issue was posted before but was marked "Will Not Consider," and it's hard to justify being in education if Canvas is not willing to protect student learning and support the verification of human interaction in the platform. Cheating is only part of the problem: these agents can also scrape private data and harvest information about users that they never agreed to share beyond the confines of their class, which opens the door to FERPA violations and a host of related issues.

Proposed solution:

Block access to Canvas for any AI agents that haven't been specifically approved by an admin for that instance of Canvas.
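
To make the idea concrete, here is a minimal sketch of the kind of allowlist-based filtering a reverse proxy or middleware in front of Canvas could apply. This is not an existing Canvas feature; the agent names, header tokens, and allowlist are illustrative assumptions, and agents that drive a real, user-authenticated browser generally present a normal browser User-Agent and would not be caught by a check like this.

```python
# Hypothetical sketch: allow normal browsers, block known AI-agent signatures
# unless an admin has explicitly approved them. All tokens are placeholders.

APPROVED_AGENT_TOKENS = {"approved-campus-bot"}          # hypothetical admin allowlist
KNOWN_AI_AGENT_TOKENS = {"comet", "gptbot", "headlesschrome"}  # illustrative denylist

def is_request_allowed(user_agent: str) -> bool:
    """Return True if the request's User-Agent should be allowed through."""
    ua = user_agent.lower()
    if any(token in ua for token in APPROVED_AGENT_TOKENS):
        return True
    return not any(token in ua for token in KNOWN_AI_AGENT_TOKENS)

# Example usage:
print(is_request_allowed("Mozilla/5.0 (Windows NT 10.0) Chrome/124.0"))     # True
print(is_request_allowed("Mozilla/5.0 (X11; Linux) HeadlessChrome/124.0"))  # False
```

A check like this only addresses traffic that identifies itself; distinguishing an AI agent operating inside a legitimate browser session is a much harder problem, which is part of why an admin-level approval mechanism is being requested rather than a simple filter.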

User role(s):

instructor, student

1 Comment
Renee_Carney
Community Team
Thanks for raising this. We completely understand where you're coming from (and honestly, we feel this pain too).
 
You’re calling out two different challenges:
  1. Automation happening within a legitimate, user-authenticated browser session, for example, a student using an AI-powered extension like HomeworkHelp.
  2. Bots or scripts trying to access Canvas outside of a normal login or through API misuse.
Both raise good questions about privacy, integrity, and control. For the latter, institutions can already restrict student API token creation, which helps prevent most unauthorized automated access. And from a FERPA standpoint, this doesn’t introduce a new exposure - it’s more about making sure permissions and configurations are set up correctly.
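For context on what "automated access" outside the browser looks like, scripted tools typically call the Canvas REST API with a personal access token in the Authorization header, which is exactly the kind of access an institution can restrict. The domain and token below are placeholders, not real credentials; this is just a sketch of the access pattern being discussed.

```python
import requests

CANVAS_BASE = "https://canvas.example.edu"       # placeholder institution domain
TOKEN = "PLACEHOLDER_PERSONAL_ACCESS_TOKEN"      # placeholder personal access token

# A scripted client listing courses via the REST API; if student token creation
# is restricted or the token is revoked, this request would be rejected.
resp = requests.get(
    f"{CANVAS_BASE}/api/v1/courses",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
print(resp.status_code)
```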
For the classroom side of this, there are ways we can all work together. Our integrations with proctoring and lockdown browser partners have improved, and those are great options for higher-stakes exams. We also encourage course and assessment designs that focus on authentic, higher-order tasks, where AI tools function less as shortcuts and more as learning partners.
We’re continuing to build on our AI principles of privacy, transparency, and institutional choice, and we love the idea of expanding AI literacy for students and educators, helping everyone understand appropriate use cases. We’re also talking with AI partners about responsible use and how to collectively discourage misuse that undermines academic integrity.
Appreciate you surfacing this. These are exactly the kinds of conversations that help shape thoughtful, practical approaches to AI in education.