COMMENTARY: “I Can’t Figure Out How to Write an AI Policy for My Syllabus,” or, How Student Use of AI Is an Issue of Class Consciousness

Note: The views expressed here are my own, not my employer’s. 


 1.

In late January—which, as of this writing in mid-February, feels like a lifetime ago—I attended a webinar hosted by MacMillan Publishing called “Taking AI Integration to the Next Level.” The webinar targeted faculty members seeking to use AI in their classrooms. The blurb for the webinar stated: 

This advanced session will explore practical strategies for gaining buy-in from reluctant colleagues, integrating AI tools into your syllabus, and managing classroom dynamics with ethical, accessible AI practices. Learn how to support students, streamline tasks, and foster inclusivity while preparing for the future of AI in education. 

One might surmise from this blurb what eventually was stated explicitly in the session: AI is “here, and we can’t do anything about that.” Indeed, one of the presenters said we need to shift from “should we use AI” to “how do we use it effectively?” A series of brief presentations on “effective” and “ethical” AI use in the classroom proceeded from this basic premise. 

Given the timing of this webinar, which was after the transition to the new federal administration but before it became public that the entirety of the federal budgeting infrastructure and Social Security were co-opted by a sociopathic tech oligarch who isn’t not a Nazi, I was surprised that there was no mention of the mutual embrace of the political right and the tech industry. (This embrace has always existed—see, e.g., Peter Thiel—but has often been overlooked). 

It also astonished me that someone could suggest there is such a thing as an “effective” or “ethical” use of AI without (1) providing a heuristic for evaluating efficacy or ethics, (2) acknowledging the history and political-economic context of the development and maintenance of genAI technology itself, and (3) acknowledging that contemporary AI boosterism originates with people who demonstrably lack any ethical scruples whatsoever (e.g., Musk and Zuckerberg) and who are increasingly comfortable with outright fascism.

But, of course, like any publisher in education, MacMillan has ulterior motives when it comes to informing us about AI in our classrooms. Indeed, MacMillan has its own learning management platform, Achieve, which integrates AI in several ways. So, a webinar that operates on the premise that “AI is here” shouldn’t be a surprise. As with other companies, MacMillan’s executives and investors—the capitalist class, not necessarily those employed by MacMillan—benefit from desperate instructors using AI tools to save time and from cash-strapped universities using them to save money. 

But why are we all supposedly interested in “integrating” AI into our courses? The tech oligarchs—Zuckerberg, Musk, Bezos, Altman—are not promoting the pervasive use of AI because they think it will be good for society or because it is even possible to use it ethically. They do it because it will continue to enrich them beyond the wildest dreams of any one of our students sitting in their tiny dorm room using ChatGPT to write a paper … hoping they’ll earn a degree, find a job that pays their rent (the prospects of which are dwindling), and be able to pay back the enormous debt they’ve incurred (also verging on impossible). 

Those students, however, live in a world with actual material circumstances to which they are subject. As such, dismissing the use of AI entirely may put them at a massive disadvantage in a world that is already rife with inequity and disparity (which AI exacerbates, but in ways made invisible to the individual student using it to gain a competitive advantage). So, as real as AI’s exploitative nature is, as real as its underlying politics are, and as real as the environmental damage it inflicts on the world is, so too are the experiences of the students we teach, the increasingly uncertain political-economic circumstances in which they find themselves, and the ways they feel they need to respond. 

We know that, as the presenters in the webinar stated, students are using generative AI for a range of tasks. And, in a sense, the presenters were right when they told us that AI was already “here.” The statistics on student use of AI vary widely, though, so it isn’t easy to get a clear picture of what percentage of students use it and how often.

Unreliable resources abound, and surveys that lack a transparent methodology are easy to find. Chegg, whose ethical scruples and commitment to the future of higher education are deeply questionable, publishes a “Global Student Survey,” which suggests that 80% of university students have used AI. Of the students who have used genAI to “support their university studies,” 56% “say they input a question once a day or more.” Of the 203 students from the U.S. who reported using generative AI, 42% said that it “frees up” more of their time, and 55% said it helps them “learn faster.”

Directly next to these statistics is a set of survey data about “Mental health challenges faced as a student in higher education.” The rhetorical work done by the report’s layout could not be more obvious: generative AI will free up your time so that you will no longer be one of the 46% of respondents experiencing academic burnout. 

My understanding of the frequency and range of student use of genAI is limited by the discipline(s) I teach—Graphic Design and User Experience Design—and the evidence that informs this understanding is anecdotal. Some students treat ChatGPT like a search engine, inputting questions to which they could otherwise have found better, more sophisticated answers.

Unfortunately, ChatGPT’s ability to stand in for a basic web search is hindered by its demonstrable lack of “intelligence” regarding SEO and the way SEO shapes the hierarchy of search results, regardless of the credibility of those results. Other students use genAI to help them develop ideas for projects or to put together an overview of the literature on a given topic. I’ve noticed comparatively fewer students using genAI to produce imagery for their projects, although that may change in the coming semesters. Overall, I sense that most of my students are using genAI and that its perceived utility derives mainly from their sense that it saves them time. 

Because I teach Graphic Design and UX, I am fortunate to engage with students in ways that are qualitatively different from many other domains of university education. I am with no more than 20 students for roughly three hours per class period, two days a week. We get to know each other pretty well. Because studio-based design education is centered on students doing work and giving each other feedback in the classroom space, we have opportunities for conversation that my colleagues who teach big lecture classes do not. 

So, one day last fall, I asked my students whether their professors had AI policies in their syllabi. At first, my students responded to my query mostly with nods. I asked a few follow-up questions. Did any professors have AI policies that barred the use of AI entirely? Some students said yes. Did the students think that was practical for the professor, and was such a policy even possible to enforce? Probably not, especially given the genAI-plagiarism-detection arms race. Did any of the syllabi containing AI policies describe the underlying rationales for those policies, and what research supported those rationales? No. 

One might argue that we should take student responses to questions that require a careful reading of the syllabus with a grain of salt. But I’m inclined to give my students the benefit of the doubt. Why have at least some of these students’ professors not included rationales explaining why their AI policy is what it is? Realistically, though, given what we know about the dreaded and often unread syllabus, why would an already overworked faculty member bother turning a classroom policy into a research paper? 

A colleague who teaches a course on AI from Cultural and Media Studies perspectives chose to co-author his course’s AI policy with his students. The class settled on a heuristic of acceptable use: students were not allowed to use genAI to produce an essay out of whole cloth, and they had to note when AI was used in their writing and research. They also determined where student responsibility lies when using information provided by an AI service: students were responsible for all the content in their writing for the course, regardless of whether AI generated it.

I appreciated this approach, but I’m not sure it’s adequate. Like the syllabi with AI policies that don’t offer reasons for those policies, it elides some of the biggest problems with AI. Its focus remains on the individual student, and it reinforces the neoliberal ideology baked into the college experience: that students are atomized individuals competing with one another in a high-stakes game of grades, jobs, extracurricular activities, and so on. 

My research and creative practice center on advanced computing, its interfaces, and its politics. I have struggled to arrive at satisfactory answers to basic questions about student use of AI and the policies that should govern that use. What should be included in such a policy, and what are its parameters?

What are we—in any discipline—obligated to teach students about AI, and what are they obligated to know about it? And how does this differ from what they need to know about a pencil when they use it? Or a computer? To what extent is AI a “tool,” and what should we and our students (of all disciplines) know about the tools we use? And how should an AI policy reflect the material circumstances to which our students are subject (and which are certainly not their fault)? 

Let’s say you’re teaching a class on utopian literature that counts toward a university writing requirement. And let’s say you’re an adjunct. You’re cobbling together jobs. Or maybe you’re one of the lucky few on the tenure track, saddled with service obligations as the department’s staff support shrinks and your dean (who is paid handsomely) insists the department can’t afford more.

Either way, you’re under immense pressure: financial pressure to keep from having to live out of your car (as is the case for so many adjuncts today) or wildly unrealistic pressure to publish more and faster than anyone who has ever worked in your department before you (as is the case for many tenure-track faculty).

Meanwhile, your university is staring down the barrel of decreasing state appropriations, the exodus of federal funding from every aspect of its research portfolio, and ongoing administrative bloat. More students and more tuition dollars seem to be the only things that can stanch the bleeding, and that requires you and your colleagues to teach bigger classes more often.

Even to the most sensitive and self-aware educator and scholar, AI might look helpful in times like this. It could write assignments for you. It could help with grading. Hey, it could write a lit review. It could free up precious time for the commute to one of the four schools at which you are an adjunct, or it could save you time on your next journal article submission, which you always feel needs to happen yesterday. And just as I learned in another MacMillan webinar, “the challenge is that everything can be so time-consuming, and AI can help us enhance and streamline these efforts.” 

Meanwhile, you might have a student in your class who commutes from Detroit to East Lansing. They provide childcare at home and work multiple jobs to pay for school. The promise of AI’s convenience-enhancing and time-saving capabilities is just as attractive to this student as it is to you as an exhausted adjunct or stressed-out junior faculty member on the tenure track. They might use ChatGPT to help them develop ideas for class projects, to give them suggestions for journal articles to read about a given topic, or, yes, to write an entire paper if and when necessary. They are, like the game-theoretic simulations of humans on which most of our computing today is based, constantly making cost-benefit analyses about every aspect of their everyday lives, trying to figure out a way to win a game rigged against them. 

In other words, you and your students are in the same boat, and it is sinking fast. 

Faculty and students occupy this sinking boat, and we are encouraged to seek our way out of the inevitable catastrophe through what Ulrich Beck called “biographical solutions to systemic contradictions.” Universities have succumbed to this logic in new and, frankly, mind-boggling ways, adopting the technocratic neoliberal ethos that understands all problems to be fundamentally technical and individual, even though those problems are always already political and social.

Metrics and analytics, increasingly granular and predictive, have dominated how we evaluate the success of our institutions of higher education. Such systems situate us in competition with one another and reinforce the game-theoretic simulations of human behavior that undergird these analytic systems’ development. Under a regime of computationally driven optimization, the urge to “save time” using generative AI is almost impossible to resist. 

However, the real benefits of using genAI tools do not accrue to the working class—including our students and us. Instead, those benefits accrue primarily to the capitalist class—the tech oligarchs so conspicuously absent from discussion in the MacMillan webinars I attended.

The international working class does not benefit from the use of AI, and this is especially true of folks in the Global South: those mining the minerals used in batteries and capacitors, those working in the refineries that turn those minerals into valuable materials for electronics, and those working on the massive cargo ships that carry these metals across the oceans from one stage of the production process to the next as part of the distributed, globalized network that forms the very infrastructure making AI possible. Any benefits that we or our students derive from using generative AI pale in comparison to the concentration of wealth and power in the hands of fewer and fewer people that an increasing reliance on AI facilitates. 

So, our shared experience—the sinking of our collective ship and the way both faculty and students come to see AI as a way off that ship, even as it accelerates the sinking—is, at least from what I’ve seen, absent from classroom communication around AI as well as from the broader discourse on AI in higher education. 

To some extent, this essay is my effort to write my own “AI policy” for my syllabi. Given what I have written here, such a policy would be more of a statement of class solidarity than a set of rules about permissible uses of genAI and requirements for citation and transparency around AI use. Maybe that’s why I haven’t yet figured out how to write an AI policy.

Perhaps trying to build class solidarity with a policy in a syllabus students don’t always read is pointless. But maybe it is, following the writing of J.K. Gibson-Graham, a way to enact a different world, to perform a new kind of academia, one that is anti-colonialist, anti-fascist, and always in solidarity with the international working class. 

Last year, I met a graduate student in Engineering who uses advanced computational tools, such as Machine Learning and AI, to help mitigate vision loss for folks who are losing their sight and to help individuals with certain types of paralysis move their limbs. He was frustrated by my seemingly “anti-AI” stance because, as he saw it, the capitalist exploitation required to create the technologies he was working with was worth the possibility of transforming someone’s life for the better.

I offered some pushback, asking whether the folks in the Democratic Republic of Congo who mine coltan—which is refined into the tantalum used in capacitors throughout computers and servers—would also eventually benefit from the technology he is developing.

But I think my pushback was misguided. Helping people see or regain the use of their limbs is terrific; it is something on which I think we can all agree. And this idea of agreement is essential: if we as a society could democratically determine the priorities for technological innovation, helping people see or walk would probably be near the top of the list.

But, and this is crucial, would we also be able to agree that it is essential to prioritize something like “not boiling freshwater organisms to death when water is returned to a stream after being used to cool a server” or, perhaps, equitably remunerating the members of the international working class who do the labor that enables advanced computing to exist in the first place? 

In his 1973 book Tools for Conviviality, Ivan Illich writes that the “design criteria” for all “tools” should be democratically determined. Illich understands tools broadly here—anything that enables people to act intentionally in their world, from highway systems to medical devices.

Today, technological innovation under capitalism is fundamentally anti-democratic. It serves to augment surplus value for the capitalist class. But, under different political-economic circumstances, perhaps some variation of advanced computing could exist, the design criteria for which would account for its impact on labor and the environment, and the functionality of which could be prioritized by the people impacted by its development and use. It isn’t easy to imagine what that might look like. But maybe, given the desperation of our moment, the AI policy in our syllabi is as good a place as any to start.  

_____________

Cover photo courtesy MLive.com
