"A gigantic public experiment that no one has asked for"
AI is a disruptive force in education. Generative AI tools like OpenAI's ChatGPT and Google's Gemini make it easy for students to cheat, producing in seconds papers that previously took days or weeks to complete. A report by OpenAI found that the most common use of ChatGPT by college students was "starting papers."
But even when students use AI ethically, its impact on learning is uncertain. The tools are too new for their long-term effects on students to be known, and some early studies show a negative impact on critical thinking and cognition.
Nevertheless, the companies behind popular AI tools are making an aggressive push to make their products a fundamental part of K-12 and higher education.
At a March 2025 roundtable discussion, Leah Belsky, OpenAI’s vice president of education, said the company wants to "enable every student and teacher globally to access AI." Belsky said she envisions a future where "AI is as integral to the university's experience as accessing the library or jumping onto the internet." Chris Lehane, another top OpenAI executive, told the New York Times that the four fundamental pillars of education should be "[r]eading and writing and arithmetic and learning how to use AI."
In April, OpenAI announced free student access to its advanced tools through the end of May. Google responded by offering free student access through the end of the 2026 academic year.
By habituating the use of AI tools among young people, these companies are hoping to create lifelong customers. Lured by the vast resources of AI companies, some large educational institutions are happy to help.
The Cal State system announced this year it will offer free access to ChatGPT "to more than 460,000 students across its 23 campuses." Duke University began providing free ChatGPT to its undergraduate students in June. Miami-Dade County Public Schools is offering Google Gemini to its 100,000 high school students.
OpenAI has also invested in billboards around college campuses encouraging students to use its product. According to OpenAI, it is the company's "first scaled marketing campaign," illustrating the importance of capturing the student market.
Meanwhile, the American Federation of Teachers, one of the nation's largest teachers unions, has accepted $23 million in funding from Microsoft, OpenAI, and another AI company, Anthropic. The money will be used to "open the National Academy for AI Instruction in New York City." The new entity will train teachers "on how to use AI tools for tasks like generating lesson plans."
"This is a gigantic public experiment that no one has asked for," said Mark Watkins, a researcher at the University of Mississippi who studies the intersection of artificial intelligence and education.
The science of AI and learning
Although familiarity with AI tools will likely be a useful skill for students when they transition into the workforce, there are serious questions about the impact of pervasive AI use in an educational setting.
One study from Microsoft and Carnegie Mellon University found that using AI can result in reduced critical thinking. “Specifically, higher confidence in [Generative AI] is associated with less critical thinking, while higher self-confidence is associated with more critical thinking,” the study found. The survey of 319 knowledge workers who use AI at work found that AI “shifts the nature of critical thinking toward information verification, response integration, and task stewardship.”
The study also found that, over time, users could become dependent on the tool, leading to weaker problem-solving skills. The researchers explained that “by mechanising routine tasks… you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise.”
Another study from the MIT Media Lab found similar results. The study subjects were asked to write SAT essays, with one group using OpenAI’s ChatGPT, one using Google, and one using only their brains. Subjects who used ChatGPT “displayed the weakest connectivity,” seemed to absorb less of the information, and “struggled to accurately quote their own work.”
Over the course of the study, the subjects using ChatGPT also got “lazier with each subsequent essay.” A separate study that researched the effects of AI on learning reached a similar conclusion, noting that “AI technologies such as ChatGPT may promote learners’ dependence on technology and potentially trigger metacognitive laziness.”
The White House goes all in on AI and education
In late April, President Trump signed an executive order aimed at encouraging teachers to “utilize AI in their classrooms to improve educational outcomes,” claiming that AI will “[spark] curiosity and creativity.”
The executive order, which came just days after China announced a similar initiative on AI in education, instructs the secretaries of the departments of labor and education to provide federal resources for educator training, AI apprenticeships, and increased AI tools in classrooms.
The White House has recruited private technology companies to help implement its AI education agenda. On June 30, Trump announced that 68 companies had signed a “Pledge to America’s Youth” to give educators funding, technology, and curriculum for AI use in classrooms.
Education Secretary Linda McMahon touted the pledge as a way to “leverage AI in classrooms” and prepare students and teachers to use AI “responsibly in education.” But the companies that have signed on to the pledge — such as Google, Amazon, OpenAI, Microsoft, Meta, and Salesforce — all have a financial incentive to maximize AI use. Notably, neither the pledge nor the April executive order calls for input from educators.


