COMMENTARY: Should We Be Worried About AI in Schooling? 

Worry? How different is our fear of AI chatbots from what we’ve already managed to do in the name of “educating” today’s youth? Answer: Not much.

With the emergence of ChatGPT, the buzz on social media and elsewhere has emphasized the potential damage AI may cause to education. Students, so the narrative goes, will bring up an intelligent chatbot, ask a few questions, and, voilà, an uninspiring, formulaic mid-term essay gets a B grade.

I, too, have some concerns about how AI may change the nature of education, but that isn’t one of them. The reason is straightforward: it’s not new.

“Wait a minute,” you’re thinking, “these chatbots are recent, and it’s only now that this level of AI technology has become generally available. Are you daft?” Perhaps so, but from my perspective as a professor, our schools have been programming students to write uninspiring, formulaic essays for years. So rather than calling them “students,” let’s call them StudyBots because (sadly) that seems to be how they are viewed in the schools.

After twelve or more years of programming (creating a StudyBot is a big job), many arrive at college ready to take on the goal they’ve been programmed to attain their whole lives. It’s time to get a college degree! They arrive in Professor Traphagan’s class on the first day of the first semester, a big class with 100 students and two teaching assistants (TAs). I begin by discussing my expectations and going over the syllabus. I typically let them go early on the first day, asking them to read over the syllabus and bring questions to the next meeting. Then, Day Two usually goes something like this….

“Good morning, class. Do you have any questions about the course and syllabus?” There’s a brief silence, and then a few hands go up. I point at one StudyBot. “Professor, where can we find the learning objectives for the class?” I say, “There aren’t any. My only concern is that you think. You will undoubtedly learn something from thinking and interacting with the rest of the class and the materials we read.”

Silence. At this point, a message appears over the heads of at least 50% of the class: ERROR: Unable to install the application.

A few more hands go up, and I call on another StudyBot who asks, “If there are no learning objectives, how will we know what we need to learn?” (Me again) “If you come to class regularly, do the readings, and engage in the class discussions, you’ll have the knowledge and skills necessary to do well on the essays. Everything is open book, so there’s no need to memorize. I’m interested in ideas, how you work with ideas and information, and how you integrate your thoughts into the themes of the class.”

This usually causes several more error messages to appear, which vary slightly. I might see repeat application install errors, but I will also see things like ERROR: The application StudyBot is not open anymore, which indicates that the application is unresponsive, perhaps due to compatibility problems with the newly installed software. Or ERROR: .dll file not found, telling me that an important piece of information is missing, preventing the StudyBot from initializing or processing commands.

A few in the class will smile and enjoy the exchange. Then, amid the general stunned silence, another hand will rise: “Professor, where can we find the rubrics for the essays?” “Rubrics? There are no rubrics.” More error messages pop up. “How will we know how points will be deducted from our essays? How will we know what it takes to get an A or a B?”

“I don’t grade that way,” I say. “I grade holistically. You can look at the syllabus to see how I think about what constitutes an A, B, C, etc., but each paper will be a little different. One paper might be technically perfect from a writing perspective but uninteresting in its argument; that will probably earn a B. Another paper might have some mechanical issues but present a brilliant argument that challenges me to think in a new way. That’s likely an A. I don’t assign points to these categories because doing so restricts my ability to think about and assess the essay.”

No more hands go up, but the error messages continue to appear throughout the classroom. Several StudyBots will throw a SMART Hard Disk ERROR 301, indicating imminent failure even though it is only the first week of class. Many of those Bots will contemplate dropping the class as soon as possible. Others will hold on, thinking that even if the System Administrator (namely me, the professor) is a dunce, there’s hope (perhaps) that the Help Desk (TAs) will be good. The TAs look nice and are much younger than the dunce. And since the class meets Gen Ed requirements, it’s probably worth getting it over with. “Damn,” they think, “I hope all System Administrators in college aren’t like this idiot.”

As I look at all the error messages popping up, I usually add data to help them process the input. “You see, it’s important to understand that grading essays is quite subjective, despite what you were taught in school. Rubrics try to make it appear like the grading is objective, but it’s impossible to be truly objective when grading the inherently subjective qualities of writing and argumentation. And (um) keep in mind that there are no objective grades for essays, anyway. There will be some variation across classes because it’s difficult to read a large group of essays without comparing them to some extent.”

Depending on the operating system, this usually brings up the Blue Screen of Death or the Spinning Beach Ball of Death over several heads. These StudyBots have frozen. Once they reboot, they’ll drop the class, convinced that the dinosaur System Administrator probably does everything in FORTRAN!

By the end of the first week, there will be a core group of students, many of whom are smiling and open to rejecting or reconsidering the programming they’ve received throughout their K-12 experience. There also will be quite a few who, despite the obvious lunacy of the System Administrator, decide to stay the course. A few of these will show up again at the end of the term, emailing me to ask how I “deducted points” on the exam to arrive at the grade they received. In some cases, this will go back and forth as I again explain that there’s no rubric and I don’t “deduct points,” but these StudyBots won’t be able to process the information.

There will be more error messages.

Perhaps the most unfortunate aspect of the U.S. educational system these days is that it’s focused on churning out StudyBots able to get good grades. That is not the same as producing students who can think creatively and learn without relying on falsely objective rubrics and arbitrary goals that fail to capture the inherently individual nature of what we learn and how we process information. In other words, our schools leave students unprepared to process information effectively unless it adheres to formulaic standards.

Students come to college today largely believing that learning is about following a formula. They’re programmed to write the “five-paragraph essay.” They’re programmed to know how many sentences should be in each paragraph. They’re programmed to write formulaic, uninspiring thesis statements. And they’re programmed to expect everything to be graded on an “objective” point system in a rubric that privileges adherence to metrics over the pursuit of quality. That, in short, is why our education system is failing. It prioritizes conformity to formulas over creativity and innovation. The type of essay a chatbot produces is not the bane of our current educational system; it represents the ideal.

If we focused on quality of learning, creativity, and critical thinking in our schools, rather than forcing teachers to teach to standardized tests and reducing education to a collection of empty metrics, chatbots would not be a problem. Creative young minds would find innovative ways to use the new technology while continuing to learn.

The most troubling thing I see in education today is not the rise of chatbots. It’s the suppression of so many creative and intelligent young minds that have been programmed to conform. Then, in college, they fight through error messages generated by the StudyBot mentality foisted upon them throughout their educational lives.

Thankfully, some manage to reboot when they get to college. But the chatbot approach to education that characterizes much of what seems to happen in K-12 is generally ineffective. It doesn’t help students learn how to learn, and it is destructive to a diverse society where everyone needs to know how to engage with complex ideas and beliefs with intellectual dexterity and creativity.
