AI Nutrition Facts: Making Informed Decisions About Your AI Tools

zachp
Instructure

It seems like everyone’s buzzing about the capabilities of artificial intelligence (AI), and Instructure is no exception. We are committed to improving teaching and learning, and we believe AI technology, when applied in a thoughtful, strategic, and ethical way, can increase educator efficiency, improve learning outcomes, and deepen student engagement and empowerment. With these goals in mind, we are integrating AI technology into our own products and platforms and leveraging our vast partner ecosystem of AI-enabled tools to deliver the best teaching and learning experiences.

Decisions about where and how to integrate AI into education are not something we take lightly, and you shouldn’t either. That’s why we’ve embraced the software industry trend of providing AI Nutrition Facts on first- and third-party AI features. These Nutrition Facts help you understand exactly what you’re getting with each feature and how your data is being used, so you can make informed decisions about which AI features and tools you want to use. Read on to learn how Instructure’s AI Nutrition Facts will help you build the right AI-enabled platform for your institution.

What are AI Nutrition Facts?

Simply put, AI Nutrition Facts mirror the nutrition facts you’d find on packaged food: they give customers full transparency about what’s inside. Careful vetting of all edtech tools is crucial for building a healthy organizational ecosystem, so use the AI Nutrition Facts to confirm that a given tool matches your organization’s AI vetting criteria and use case. You can find our first-party Nutrition Facts in our AI in Education community group, and third-party Nutrition Facts on the Emerging AI Marketplace.

[Images: sample AI Nutrition Facts labels]

Using the AI Nutrition Facts Label

The first section, Model & Data, provides information to help you understand where the AI output comes from.

[Image: Model & Data section of the label]

  • Base Model – Indicates which version/model the functionality is built on and describes the tool’s capabilities, performance, and general applicability.
    • Guiding questions: Did you build this LLM or are you using a model from a vendor? What are the benefits and use cases of this model? Does this bring up any equity issues?
  • Trained with User Data – Indicates if customer data was used to train the base model.
    • Guiding Questions: Is my students’ data being used to train this model for other institutions? Does this bring up any equity issues? Does this imply a need for stringent data governance policies?
  • Data Shared with Model – Indicates what customer data is sent to the model during training or processing.
    • Guiding questions: What data does the model use to generate a useful, relevant output? Does this raise any concerns about data privacy and compliance?

 

The next section, Privacy & Compliance, provides information to help you determine whether student and institution data is kept safe.

[Image: Privacy & Compliance section of the label]

  • Data Retention – Indicates how long data related to the feature will be retained.
    • Guiding Questions: How long does this AI feature store my institution’s data?
  • Data Logging – Indicates if the feature provides logging tools that help the provider or user understand what output the AI has produced and how it was produced.
    • Guiding Questions: What records are you keeping of queries and their outputs? What are the risks if data is logged unintentionally?
  • Regions Supported – Indicates the geographical locations where the AI tool is permitted to operate. 
    • Guiding Questions: Is the tool compliant with local regulations and accessible to all intended users?
  • PII (Personally Identifiable Information) – Indicates if PII is exposed and where.
    • Guiding Questions: What PII, if any, is exposed and in what environment does that occur?

 

The final section addresses how you should use the tool’s output.

[Image: Outputs section of the label]

  • AI Settings Control – Indicates whether there are settings to control the availability and use of the AI functionality.
    • Guiding Questions: Can the AI feature be turned on or off? Who controls the functionality and for which users?
  • Humans in the Loop – Indicates whether AI-driven decisions can be modified, verified, and corrected by a human.
    • Guiding Questions: Are there tools to review/change/block the output of AI? What are the potential impacts if a mistake is made?
  • Guardrails – Indicates any safety mechanisms put in place to prevent undesirable outcomes, biases, or harmful human actions.
    • Guiding Questions: Who is my audience? What guardrails are in place to ensure appropriate outputs? 
  • Expected Risks – Indicates any expected risks associated with use of the tool.
    • Guiding Questions: What are some known shortcomings of this feature? What are some potential security risks?
  • Intended Outcomes – Provides the intended outcome(s) of the tool, such as improving learning efficiency, providing personalized feedback, or streamlining administrative tasks.
    • Guiding Questions: What are the benefits of using this feature? What outcomes are expected, and how will you measure them?
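Taken together, the three sections above describe a structured set of fields. As an illustrative sketch only (not an official Instructure schema — the field names, types, and example values here are our own), an institution could record each label in a simple data structure and run its own vetting checks against it:

```python
from dataclasses import dataclass

@dataclass
class AINutritionFacts:
    """Hypothetical record of one AI Nutrition Facts label."""
    # Model & Data
    base_model: str
    trained_with_user_data: bool
    data_shared_with_model: str
    # Privacy & Compliance
    data_retention: str
    data_logging: bool
    regions_supported: list[str]
    pii_exposed: bool
    # Outputs
    ai_settings_control: bool
    humans_in_the_loop: bool
    guardrails: str
    expected_risks: str
    intended_outcomes: str

    def passes_basic_vetting(self) -> bool:
        # Example institutional policy: reject tools that train on
        # user data or expose PII. Your criteria will differ.
        return not self.trained_with_user_data and not self.pii_exposed

facts = AINutritionFacts(
    base_model="Vendor LLM (version listed on the label)",
    trained_with_user_data=False,
    data_shared_with_model="Course content only",
    data_retention="30 days",
    data_logging=True,
    regions_supported=["US", "EU"],
    pii_exposed=False,
    ai_settings_control=True,
    humans_in_the_loop=True,
    guardrails="Content filters",
    expected_risks="Possible inaccuracies",
    intended_outcomes="Streamline administrative tasks",
)
print(facts.passes_basic_vetting())  # prints: True
```

The point of the sketch is that the label is machine-checkable: once the answers to the guiding questions are captured as data, vetting criteria become a function you can apply uniformly across every tool you evaluate.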


More Questions? For more information, visit our Instructure Community to find AI resource documents, recent AI product updates, and other AI blog posts. You can also stop by our AI Resource Hub for more insights and check out our InstructureCon session recording on the innovative and intentional use of AI.

3 Comments
ANDREWJONES208
Community Participant

Zach,

 

As an educator and Canvas Course user and creator, I would like to gain more training and insight on how to process and utilize these "nutrition facts" to personalize learning and adjust course development. Any tips on how and where to gain training and expertise for professional growth?  Is there an Instructure University?  I did check out the InstructureCon link: Innovation, Meet Intention: Using AI in Canvas to Elevate Teaching, Learning, a...

 

Thanks!

Andrew

zachpendleton
Community Participant

@ANDREWJONES208 Thanks for reaching out! The nutrition facts are designed as a first step in an AI program — they should answer the question, "do I feel comfortable turning this tool on for my institution/classroom/activity?"

As you look to personalize learning content and adjust the way that you develop materials, the most relevant pieces of the nutrition facts card are the "Intended Outcomes" and "Expected Risks" sections as they will describe what the feature is meant to do and where it may fail.

For additional resources, I recommend that you keep an eye on our AI Resources Hub, where we'll continue to post announcements, information, and tips. I also really like Ethan Mollick's One Useful Thing blog, where he discusses all facets of generative AI as an educator. He frequently includes sample prompts and practical steps as well as covering AI news in the broader industry.

mwolfenstein
Community Participant

This isn't about the content of this post, but I wanted to mention that unlike the food nutrition facts image, which is fully legible, the AI Nutrition Facts image at the top of this page has a very small font, and it doesn't get any larger when you click on it or view it full screen. On inspection, the alt text is just the file name, which isn't even descriptive. I'm a slightly far-sighted user as of the last couple of years, and I can barely make out the text on a fairly large desktop monitor.

As to the feature itself, it's a good idea for addressing what can be addressed within the relatively local context. Obviously there are massive ethical and environmental issues around how all of the popular models have been trained and in relation to their ongoing use and development, but I don't expect Instructure to address those.

As a final note for now, the sample provided for PII states, "PII in discussion replies may be sent to the model, but no PII is intentionally sent to the model." I know what this is trying to say, but the phrasing is extremely confusing, and I recommend modifying it so that a less savvy user can understand that PII can be sent to the model through user-generated content, but that the tool doesn't send PII as a default behavior. My exact phrasing might be a bit off, but assuming I decoded the sample phrasing correctly, I'm hoping you can see how it could be significantly clearer.

In general, kudos on taking this step!