Artificial Intelligence (AI) is not what our education system needs; as it stands, it threatens simply to swap the problems of our campuses for new ones. The California State University (CSU) system received a new AI initiative around the same time as significant budget cuts, raising concerns that AI will replace our education system and the people holding it up, when what they need is more financial support, not less.
Large language models (LLMs) like ChatGPT have been publicly available for years at this point, but they are still far from perfect, bringing with them issues like a decline in critical thinking and AI-propagated misinformation. While I think a total ban on AI is impractical and, frankly, unrealistic, it must be reined in and controlled; left unchecked, AI threatens to do more harm than good, especially in classrooms.
One example of how AI can harm educational outcomes is its use to supplant the act of critical thinking. The purpose of assignments in education is to learn through doing: figuring out how to apply the concepts you studied. Students won't learn as well if an AI does the thinking for them.
In an Assembly joint hearing on the AI initiative, Elaine Villanueva Bernal, California Faculty Association (CFA) associate vice president of lecturers, stated that AI can undermine the goal of teaching students critical thinking.
“If students come to believe they can outsource analysis and reasoning to an algorithm, they lose the very habits of questioning and reflection that are the core of a university education,” Bernal said.
Additionally, when interviewed about AI in education, Skyline College English professor Michael Cross said that the biggest concern with AI in the classroom is students using it to simply do all the work for them.
“I think that the problem with AI in the classroom is that students are using it for the comprehensive project and they’re using it for every step of the project, which I think is the primary concern,” Cross said.
Cross further clarified that he thinks AI can have valuable uses in the earlier stages of projects, but not in the later, more critical-thinking-heavy stages that involve developing and supporting arguments and solving problems.
The issue is that, unlike human educators, AI has nothing preventing it from doing an entire assignment for a student, skipping the part where they develop their critical thinking skills. Even if there were an LLM with guardrails that offered guidance without solving problems outright, it would take serious legislation to clamp down on the easily accessible alternatives that are less limited.
In a similar vein, there are concerns about the reliability of the information LLMs provide. LLMs have been documented making up information to fit prompts, a phenomenon typically referred to as "hallucination," which calls into question the validity of anything they say until a human has checked it. Even when an LLM provides real information, it is dangerous to assume that information is complete or accurate, as LLMs have also been documented pulling from sources of any kind: peer-reviewed studies, opinion pieces, and random internet comments are all treated the same in the effort to answer a prompt. Again, that means the output is untrustworthy unless verified by a human.
Cross said as much on the topic, stating that what an LLM presents as true may not actually be, due to the aforementioned hallucinations and the lack of verified sources.
“People don’t understand what is often referred to as AI hallucination, that AI makes things up all the time. I think way more than people believe that it does,” Cross said. “And oftentimes the sources that AI is pulling from are not juried or verified sources in the first place. So they might be providing information that the AI chatbot or the large language model believes is true, but it might not be.”
Bernal goes into greater depth on the downsides of AI in their full testimony, which can be found on the CFA website.
For all these faults, I don't believe it's practical to simply ban all AI, but I do think the focus needs to shift toward ways of helping students and faculty beyond AI alone. Educators in the United States are famously overworked and underpaid, which can make it difficult to teach as effectively as one might want. AI aims to solve this by taking on that workload, but another way to address it is to simply provide more funding for our education system, allowing more educators to be hired so that each carries a lighter load.
Cross echoed this sentiment, saying that the number of students he's working with has made it very difficult to engage in one-on-one teaching.
“I currently am working with 100 plus students, and to teach the way that I want to teach, which is a lot of one-on-one engagement with students, is nearly impossible to do when you’re dealing with that many students,” Cross said.
Cross further explained that he believes material and structural improvements to the education system are a good starting point, but the question of how we let AI affect us is one we will inevitably have to answer.
My answer is that AI needs to be a tool that broadens the range of what humans can do, performing the tasks humans are less capable of rather than replacing us in our daily lives. While I have no doubt there are applications for AI in highly advanced fields (it reportedly shows great promise in cancer detection, for example), I also believe that humans are the choices they make, and the more we outsource our thinking to something else, the less human we become. We should prioritize maximizing human intelligence before relying on artificial intelligence, and that starts with better funding for our education system.
