IgniteAI Rubric Generator feedback

JamesMacaulay
Community Explorer

Hi there,

We've been testing the IgniteAI rubric generator feature preview, and I have feedback to provide.

  • The tool appears to output only in American English. The output should align with the account's localised language where possible.

  • The generated rubrics are somewhat generic, as they draw only on the detail in the assignment text area.
    QUT Context: we've asked academics to put assessment details on a separate details page so that submission portal access can be controlled (for group assignments in particular), so using this tool would require pasting the assessment task details into a 3-line text field.

  • To reduce configuration effort and keep numeric values consistent, the rubric performance bands should be able to align to the account/course grading scale. Otherwise, the tool picks its own values, which need to be corrected every time.
    QUT Context: coordinators would have to manually fix a lot of the values, and this is where we see academics make mistakes that aren't picked up until they are marking and it's too late.

  • Related to the above, regenerating a criterion resets the numeric values for its performance levels.

  • Reading task details from the assignment description seems to override some of the settings. For example, if the task description includes 6 sub-points, the tool tended to generate 6 criteria even when a different number was selected in the settings. This is inconsistent but repeatable.

  • As a higher education institution, we would want to turn off the K-12 generation levels and lock the tool to Higher Education only. Ideally, we would like to see levels such as introductory, developing, and mastery, and/or undergraduate/postgraduate, to better align the language with HE.

  • For many of the generated criteria, the descriptors at what the tool labelled a passing grade undershot what we would consider meeting the learning outcomes.
    QUT Context: our guidance is that a pass should demonstrate that a student has met the unit learning outcomes, but the generated rubric can read as though a pass falls short of that.

We will continue testing and look forward to future developments on this tool.