As has been noted, AI is being used to cheat. A lot:
Lee said he doesn’t know a single student at the school who isn’t using AI to cheat. To be clear, Lee doesn’t think this is a bad thing. “I think we are years — or months, probably — away from a world where nobody thinks using AI for homework is considered cheating,” he said.

[Image: Clio, the Muse of History]
He’s stupid. But that’s OK, because he’s young. Studies are showing that people who use AI too much get stupider.
The study surveyed 666 participants across various demographics to assess the impact of AI tools on critical thinking skills. Key findings included:
- Cognitive Offloading: Frequent AI users were more likely to offload mental tasks, relying on the technology for problem-solving and decision-making rather than engaging in independent critical thinking.
- Skill Erosion: Over time, participants who relied heavily on AI tools demonstrated reduced ability to critically evaluate information or develop nuanced conclusions.
- Generational Gaps: Younger participants exhibited greater dependence on AI tools compared to older groups, raising concerns about the long-term implications for professional expertise and judgment.
The researchers warned that while AI can streamline workflows and enhance productivity, excessive dependence risks creating “knowledge gaps” where users lose the capacity to verify or challenge the outputs generated by these tools.
Meanwhile, AI is hallucinating more and more:
Reasoning models, considered the “newest and most powerful technologies” from the likes of OpenAI, Google and the Chinese start-up DeepSeek, are “generating more errors, not fewer.” The models’ math skills have “notably improved,” but their “handle on facts has gotten shakier.” It is “not entirely clear why.”
If you can’t do the work without AI, you can’t check the AI. You don’t know when it’s hallucinating, and you don’t know when what it’s doing isn’t the best or most appropriate way to do the work. And if you’re totally reliant on AI, well, what do you bring to the table?
Students using AI to cheat are, well, cheating themselves:
It isn’t as if cheating is new. But now, as one student put it, “the ceiling has been blown off.” Who could resist a tool that makes every assignment easier with seemingly no consequences? After spending the better part of the past two years grading AI-generated papers, Troy Jollimore, a poet, philosopher, and Cal State Chico ethics professor, has concerns. “Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate,” he said. “Both in the literal sense and in the sense of being historically illiterate and having no knowledge of their own culture, much less anyone else’s.” That future may arrive sooner than expected when you consider what a short window college really is. Already, roughly half of all undergrads have never experienced college without easy access to generative AI. “We’re talking about an entire generation of learning perhaps significantly undermined here,” said Green, the Santa Clara tech ethicist. “It’s short-circuiting the learning process, and it’s happening fast.”
This isn’t complicated to fix. Instead of assigning essays and unsupervised out-of-class work, instructors are going to have to evaluate knowledge and skills by:
- Oral exams. Ask students questions, one on one, and see whether they can answer and how good their answers are.
- In-class, supervised exams and assignments. No AI allowed, with proctors there to make sure of it: can you do the work without help?
The idea that essays and take-home assignments are the way to evaluate students wasn’t handed down from on high; they haven’t always been how students’ knowledge was judged.
Now, of course, this is extra work for instructors, and the students will whine, but who cares? Those who graduate from such programs (which will also teach how to use AI; not everything has to be done without it) will be more skilled and capable.
Students have always been willing to cheat themselves by cheating instead of actually learning the material. AI is a new way of cheating, but old methods will stop it cold, IF instructors will do the work, and if they can give up the idea, in particular, that essays and at-home assignments are a good way to evaluate learning. (They never were, entirely: there was an entire industry for writing other people’s essays, which I assume AI has pretty much killed.)
AI is here, and adapting to it requires changes. That’s all. And unless things change, it isn’t going to replace all workers or any such nonsense: the hallucination problem is serious, researchers have no idea how to fix it, and right now no US company is making money on AI. Every single query, even from paying clients, costs more to run than it returns.
IF AI delivered reliable results and thus really could replace all workers, if it could fully automate knowledge work, well, companies might be willing to pay a lot more for it. But as it stands right now, I don’t see the maximalist position happening. And my sense is that this particular model of AI, a blend of statistical compression and reasoning, cannot be made reliable, period. A new model is needed.
So, make the students actually do the work, and actually learn, whether they want to or not.