In February, Ella Stapleton, then a senior at Northeastern University, noticed something unusual while reviewing notes for her organizational behavior course. She suspected her business professor had consulted ChatGPT.
Midway through a document prepared for a lesson on leadership models, an instruction directed to ChatGPT read: “Expand all areas. Be more detailed and specific.” Following this were lists of positive and negative leadership traits, each defined plainly and supported with bullet-pointed examples.
Stapleton texted a classmate, asking if she had seen the notes posted on Canvas, the university’s course management system, and suggested they were generated using ChatGPT.
Her classmate responded in disbelief: “Oh my God, what the heck?”
Curious, Stapleton examined other materials from her professor and identified additional signs of AI involvement: distorted text, images of office workers with odd body parts, and glaring spelling mistakes.
She was disappointed. Given the university’s cost and reputation, she expected a higher standard of education. The course was a required component of her business degree, and the program explicitly prohibited academically dishonest behaviors, including unauthorized use of artificial intelligence or chatbots.
“He tells us not to use it, then uses it himself,” Stapleton lamented.
Stapleton filed a formal complaint with Northeastern’s business school, citing undisclosed AI use and other concerns about her professor’s teaching style. She also requested a tuition refund for the course, which represented about a quarter of her semester’s fees, amounting to over $8,000.
When ChatGPT became publicly available at the end of 2022, it sparked widespread alarm across educational institutions due to how easily it facilitated cheating. Students could have the AI write history papers or literary analyses within seconds. Some schools banned the technology outright, while others adopted AI detection tools despite uncertainties about their accuracy.
However, the situation has evolved. Students now express frustration on platforms like Rate My Professors about their instructors’ overreliance on AI. They scrutinize course content for overused phrases typical of ChatGPT, such as “crucial” and “delve deeper.” Beyond accusations of hypocrisy, students argue an economic point: they pay substantial tuition to learn from human educators, not algorithms they could access for free.
In contrast, educators defend their use of AI chatbots as tools to enhance teaching quality. Many say these technologies save time, help manage overwhelming workloads, and serve as automated teaching assistants.
Usage rates among professors are rising. A national survey last year of over 1,800 higher education instructors found that 18 percent were frequent users of generative AI tools; this figure nearly doubled in a follow-up survey this year, according to consulting firm Tyton Partners. The AI industry is eager to support and profit from education, with startups like OpenAI and Anthropic launching enterprise chatbot versions designed specifically for universities.
Generative AI is clearly here to stay, but university policies are struggling to keep pace. Professors, meanwhile, face a steep learning curve of their own, balancing the technology’s usefulness against student skepticism.
Grading in the Age of AI
Last fall, Marie, a 22-year-old student at Southern New Hampshire University, submitted a three-page essay for an online anthropology course. She was pleased to see a high grade posted on the school’s platform. However, in the comments section, her professor had accidentally shared an exchange with ChatGPT, including the grading rubric the chatbot was asked to apply and a request for “really nice feedback” for Marie.
“From my perspective, the professor didn’t even read anything I wrote,” said Marie, who requested anonymity and asked that her professor’s identity remain confidential. She understood the temptation to use AI, noting that teaching at the university often amounted to a “third job” for faculty managing hundreds of students. She did not wish to embarrass her professor.
Still, Marie felt wronged and confronted her professor during a Zoom meeting. The professor acknowledged reading student essays but explained that ChatGPT was used as a guide, a practice permitted by the school.
Robert MacAuslan, Vice President of AI at Southern New Hampshire, stated the university believes “in the power of AI to transform education” and has established guidelines for both faculty and students to “ensure this technology enhances, rather than replaces, human creativity and oversight.” Faculty are prohibited from using tools like ChatGPT and Grammarly “in place of authentic, human-centered feedback.”
“These tools should never be used to ‘do the work’ for students,” MacAuslan explained. “Instead, they can be seen as enhancements to their existing processes.”
After a second professor appeared to provide ChatGPT-generated feedback, Marie transferred to another university.
Paul Shovlin, an English professor at Ohio University in Athens, sympathized with Marie’s frustration. “I’m not a big fan of that,” he said after hearing her experience. Shovlin is also part of the university’s AI faculty team, tasked with developing appropriate ways to integrate AI into teaching and learning.
“Our value as instructors lies in the feedback we give to students,” he said. “It’s the human connections we build—being the person reading their words and reacting to them—that matter.”
Shovlin supports incorporating AI into education but believes it should not simply make life easier for instructors. Students must learn to use the technology responsibly and develop an “ethical compass” around AI, since it is likely to be part of their future workplaces. Misuse could have serious consequences. “If you mess up, you could lose your job,” he warned.
For example, in 2023, Vanderbilt University’s education faculty sent an email calling for community unity after a mass shooting at another campus. The message promoted a “culture of care” and “building strong relationships,” but included a disclaimer revealing it had been generated by ChatGPT. After students criticized the use of a machine to express empathy, the officials involved temporarily stepped down.
Shovlin noted that setting rules is challenging because acceptable AI use depends on the subject matter. The Center for Teaching, Learning, and Assessment, which he is part of, has established “principles” for AI integration that avoid a “one-size-fits-all” approach.
Interviews with dozens of professors whose students mentioned AI use in online reviews revealed mixed approaches. Some used ChatGPT to create programming assignments and quizzes, even when students complained about nonsensical results. Others employed it to organize or soften their feedback. Faculty members said their expertise helps them identify when AI generates inaccurate or fabricated information.
There was no consensus on acceptable use. Some acknowledged using ChatGPT to assist with grading; others criticized the practice. Some stressed transparency with students about AI use, while others withheld disclosure due to student skepticism.
Most considered Stapleton’s experience—where her professor used AI to generate notes and slides—acceptable, provided the instructor reviewed and edited the content to reflect their expertise. Shovlin compared this to the common academic practice of using third-party materials like lesson plans and case studies.
“Calling a professor ‘some kind of monster’ for using AI to create slides seems ridiculous to me,” he said.
An AI-Enhanced Teaching Assistant
Shingirai Christopher Kwaramba, a business professor at Virginia Commonwealth University, likened ChatGPT to a time-saving partner. Tasks that once took days to prepare now take hours. For example, he uses it to generate datasets of fictional chain stores for students to apply statistical concepts.
“I see it as a calculator on steroids,” Kwaramba said.
He added that the extra time gained allows him to dedicate more attention to students during office hours.
Other educators, like David Malan of Harvard University, noted that AI use has reduced the number of students seeking individual help during office hours. Malan, a computer science professor, integrated a custom AI chatbot into a popular introductory programming course. Hundreds of students use the chatbot for homework assistance.
Malan refined the chatbot to provide guidance rather than full answers. In a 2023 survey—the first year it was offered—most of the 500 students found it helpful.
“Instead of spending time on more mundane questions during office hours, my teaching assistants and I focus on weekly lunches and hackathons—more memorable and engaging experiences,” Malan said.
Katy Pearce, a communication professor at the University of Washington, developed a personalized AI chatbot trained on her past graded assignments. It now offers students feedback on their writing anytime, day or night, benefiting those hesitant to ask for help.
“Will there come a time when AI can perform much of the work graduate teaching assistants currently do? Absolutely,” Pearce said.
She added that this shift raises concerns about the pipeline of future professors, many of whom get their start as teaching assistants.
“That will certainly be a challenge,” Pearce noted.
A Learning Opportunity
After filing her complaint, Stapleton met several times with Northeastern business school officials. In May, the day after her graduation ceremony, she was informed her tuition refund request would be denied.
Rick Arrowood, her professor, expressed regret over the incident. An adjunct professor with nearly two decades of teaching experience, Arrowood admitted uploading his class files to ChatGPT, the AI search engine Perplexity, and an AI presentation tool called Gamma to “give them a fresh look.” At first glance, he said, the AI-generated notes and slides appeared excellent.
“In hindsight, I wish I had reviewed them more carefully,” he said.
He posted the materials online for students to review but emphasized he did not use them during class, preferring discussion-based lessons. He became aware of the flaws only after school officials raised concerns.
The embarrassing episode made him realize instructors should exercise greater caution when using AI and disclose its use to students. Northeastern recently implemented a formal AI policy requiring attribution when AI tools are employed and review of results to ensure “accuracy and appropriateness.” A university spokesperson confirmed the institution embraces AI to enhance teaching, research, and operations.
“My role is teaching,” Arrowood said. “If my experience serves as a lesson for others, then that’s my silver lining.”