Generative AI threatens to tear final shreds from tapestry of critical thinking
The rise of generative AI, large language models (LLMs) such as ChatGPT, Claude, and Gemini, has been both a blessing and a curse, and it’s easy to see why. On one hand, a personal expert in almost every field, carried in your pocket, is something most people could only dream about 10 years ago. On the other hand, misuse, whether careless or deliberate, runs rampant, and much of it takes root in academia. The use of generative AI to cheat has evolved, and LLM agents can now complete entire online courses in a matter of minutes. If proper controls and education are not put in place, academic dishonesty involving generative AI will continue to grow, and it will not take long to see its effects on our society.
While generative AI has many practical benefits, how it is applied falls to the user, and it is increasingly being used to cheat on schoolwork. For the most part, this used to mean having the LLM write an essay and, after a few minor touchups, turning in its output directly. That is dangerous enough on its own: using LLMs to write essays completely bypasses the need to think and reason critically, a core skill in every field. It is scary to think that, for example, med students who leaned on LLMs could soon become doctors who lack the critical thinking required to properly diagnose and treat patients. Cheating with generative AI has only gotten progressively worse. Advances in LLMs have produced new, more powerful models that sound more and more human, approaching the point of being indistinguishable from a real student. With this, agentic AI has emerged. Agentic AI uses generative models, usually Claude or ChatGPT, to autonomously ‘reason’ and complete tasks without human input. These tools give LLMs free rein over entire desktop computers, and they will do whatever it takes to finish a task. Just like chatting with an LLM, there are genuine use cases for agentic AI: simple tasks like reading and flagging emails, or even debugging programs, can be automated, gifting the user more time for the things that matter. Again, however, as with any tool, there is bound to be malicious use, a siren’s song of plagiaristic temptation.
Recently, agentic AI has made an impact on the academic scene. Students are building agents, often from readily available online tools, that log into a learning management system like Canvas and automatically download, complete, and upload assignments in a matter of minutes. This poses a brand-new threat that, left unchecked, could be devastating not only to academia but to the workforce as well. Reasoning is even more detached than when simply using AI to write an essay, and in most cases the user never needs to examine the agent’s work, eliminating the last chance for a student to learn. Using agentic AI to complete schoolwork also creates problems for other students. Many instructors face an increased workload from the flood of incoming AI-generated work, and they are dangerously incentivized to catch and punish students using generative AI, often at the expense of honest ones.
Yet there is no need to be frightened by the rise of generative AI. If the right precautions are put in place, its value as a tool can shine. That primarily means education, for both the instructors who grade and the students who use generative AI. Instructors must be taught how LLMs work and the quirks and tells that result. Students need the same education. Ultimately, students must understand that generative AI lacks real reasoning skills and that its output will always fall short of the standards of instructors who can recognize the pitfalls this technology falls into. The understanding that generative AI is incapable of, and cannot replace, human reasoning and ingenuity is imperative to a functioning future with AI. Without it, people will use AI as a weapon, not a tool, generating a society of thoughtlessness and complacency.
by Michael Hotter
Published March 20, 2026
Oshkosh West Index Volume 122 Issue VI