Two items about automated feedback appeared in the news last week, both of which I found extremely sad to read, and I thought it would be worth sharing some thoughts here:
Why it’s so troubling that we allow computers to grade test essays by Valerie Strauss and Julie Rine in the Washington Post
This article is about yet another attempt to have computers "assess" student writing. The absurdity of such efforts has been well documented elsewhere, but if you are not up on the topic, this article is a good place to start. Yes, machines can provide grades comparable to the grades that humans assign, but that is simply because humans themselves are grading like machines, a process that works much better when the students in turn write like machines, producing the words and phrases and sentences that are expected of them in formulaic fashion.
My hands hovered over the keyboard as my brain caught up to what my fingers had just typed. I couldn’t quite believe I had just told this to a student: “The computer won’t know that this fragment works as part of your style. It will just see a sentence fragment and most likely will ding you for it.”
I was referring to the fact that computers are now grading essays on our state’s American Institutes for Research assessments.
Even before computers took over, these exams were never looking for writing that demonstrated a unique voice and style, but rather writing that included enough elements on a checklist for the assessor to deem the text “proficient.”
I have very strong feelings about this: if you do not have a meaningful purpose for student writing so that some audience (you, other students, family, friends) will read the student's writing with pleasure, then just give the students a test instead, something suitable for machine-scoring with right/wrong answers. We should not use writing as a proxy for testing. Instead, we should use writing in order to communicate something of importance, person to person. If the writing has no meaning to the student or to the class or to you, the teacher, it's not a good solution to have a machine to "read" the writing and assess it; quite the contrary: I can think of no better way to convince students that their writing has no real value for anyone.
Pearson Embedded a 'Social-Psychological' Experiment in Students' Educational Software by Sidney Fussell in Gizmodo (the story was also written up by EdWeek and elsewhere; this happens to be the article I read first)
The Pearson story raised a question about the ethics of using student performance data for research purposes without the students knowing that they were research subjects. What concerned me, though, was the study itself, which tested the effects of totally automated feedback on student performance: when students got a question wrong, they received either "growth mindset" feedback messages (e.g. "No one is born a great programmer. Success takes hours and hours of practice"), "anchoring of effect" feedback messages (e.g. "Some students tried this question 26 times! Don’t worry if it takes you a few tries to get it right"), or no "special messages" at all.
The messages apparently had no positive effect, which does not surprise me: there is nothing "special" about getting automated feedback messages. In fact, it seems these messages may even have discouraged students from attempting more problems (see the article for details). I would call it the Microsoft-Clippy effect: don't bother me! I'm working!
In a separate blog post, I'll write up something about an effort I'm making in my classes to do a better job with audience feedback on student writing: real comments from a real audience that, I hope, can be really useful to students as they work on their writing. More on that in a bit.
Meanwhile, here is some Clippy humor. Yes, I'm old. There are probably some people reading this who are too young to remember Clippy, ha ha. Lucky you! He has his own Wikipedia article: Office Assistant.