
We're getting something fundamentally wrong about AI, schools, and kids

Johnny can't read because he would rather watch Ani give Khanmigo a blow job.

Welcome to Scholastic Alchemy! I’m James and I write mostly about education. I find it fascinating and at the same time maddening. Scholastic Alchemy is my attempt to make sense of and explain the perpetual oddities around education, as well as to share my thoughts on related topics. On Wednesdays I post a long-ish dive into a topic of my choosing. On Fridays I post some links I’ve encountered that week and some commentary about what I’m sharing. Scholastic Alchemy will remain free for the foreseeable future but if you like my work and want to support me, please consider a paid subscription. If you have objections to Substack as a platform, I maintain a parallel version using BeeHiiv and you can subscribe there.

I’m writing this during the off hours of the week my second kid was born, so my thoughts may be a bit more scattered than usual. Apologies if sleep deprivation has impacted my writing; it will hopefully improve soon!

Something is off

I’ve been batting an ambiguity around in my mind for a few months. Something seems off about all the AI and education discourse. Something seems missing. The way commentators, tech companies, and teachers have talked about AI’s role in schools has felt incomplete. Until the last few weeks, I’ve been unable to put my finger on exactly what I thought was missing. The most succinct way I can say this is that we have been ignoring the ways kids are actually using AI for non-academic tasks, in their personal lives, and how that usage may prove incredibly problematic. We are, as always, almost exclusively focused on what AI means for students’ academic performance, and we are not focused on how using AI may damage children’s psychological development.

If you look at teachers and schools, the discussion around AI revolves around whether or not to incorporate AI into the classroom and how best to do so. Maybe they argue that kids are inevitably going to use AI for everything, so the best practice is to teach kids how to cheat using AI.

We’ve been acting like the adults at the end of Lord of the Flies, showing up to civilize kids who are already running the island. But they’re not savages—they’re just using AI better than we are. Pretending otherwise? That’s not discipline. That’s sabotage.

Kids aren’t cheating. They’re adapting. They’ve built their own ethics and uses for AI while we’re still pretending it’s optional. Calling that survival “dishonesty” isn’t discipline. It’s denial.

We’re not preparing them for the world they live in if we pretend AI doesn’t exist in their backpacks and browsers.

It is hard to imagine letting go of roles once seen as heroic. But here’s the truth: we no longer need teachers to deliver content. We don’t need teachers to “instruct” and test students over that instruction. Admittedly there’s a good bit of sadness as we think of the elimination of the role that we, and so many of our heroes from that past age, played in the lives of students. But with AI for answers and AI for promoting learning, the hard truth is that we honestly don’t need traditional teachers anymore. We need skilled professionals who can help cause learning for each individual learner!

Pre-schoolers need to begin working on AI literacy. ChatGPT now has “Study Mode” which leverages the Socratic method and acts as a prompter, questioner, or guide rather than just providing the answers.

Alternatively, we have to get rid of AI in the classroom or at least act to limit its use so that we can tell whether students are learning what we intend for them to learn. Blue Books are back! Work on building communities of scholars.

Given these issues, we propose a third way that addresses the isolation that “personalization” brings and avoids the punitive policing that blue books imply. This fall, we plan to bring some writing back into the classroom not to catch cheaters, but to rebuild learning communities, emphasize the importance of writing to learning and thinking, and resist personalization.

AI-free spaces may be a necessity as AI-embedded programs effectively intercept our students before they ever engage with any work.

“We need to think seriously about embedded-AI programs offering help, even before the student has read anything or meaningfully engaged with the content.

It may not be an instance of ‘consult AI if you need help.’

It may be ‘here’s an AI bot to help you, whether you need it or not.’”

Economically valuable skills are complex skills.

However, it’s also true to say that very simple skills don’t command much economic value. If a new technology makes a task very easy and efficient, it can completely eliminate any economic value in providing the task. Suppose LLMs get so good that anyone can ask them about a legal issue and get a perfect answer. In that case, people are not going to pay someone a large hourly rate to ask the question for them! They’ll cut out the middleman and ask it themselves.

So a student who is typing an essay question into an LLM, handing the output to their teacher and justifying it on the grounds that “this is what professionals do in their day job” is not making a good argument.

If that really is all the professional is doing, their job is not going to be around for long.

Last year, my science teacher did a “responsible AI use” lecture in preparation for a multiweek take-home paper. We were told to “use it as a tool” and “thinking partner.” As I glanced around the classroom, I saw that many students had already generated entire drafts before our teacher had finished going over the rubric.

When teachers know their students are gaming the system and students know their teachers know, the relationship frays. Why bother listening to feedback when we didn’t write the work anyway? Why respect a teacher’s guidance when the online “tutor,” the one that answers instantly, is open in another tab? Why bother learning when schools are encouraging their teachers to deploy AI tools in the classroom and thereby effectively telling us we don’t need to learn?

Do you see what I mean yet? Do you feel that something crucial is missing from all this discourse about AI and schools? It’s all about how kids use AI for school. It’s all about how teachers use AI for teaching. It’s all about how AI will be important for work. This is all eerily similar to how we talked about bringing phones and 1-to-1 computers/tablets/Chromebooks into the classroom. All of the focus was on how to teach with tech, how to get the kids to use the tech, and how important that would be to their future employability.

We know from experience, though, that the big problem with all this tech in schools has little to do with how kids use the tech for learning. The problem is that kids are using the tech to do anything and everything else. In my last links post, I shared the story of a parent who visited his kid’s middle school. It’s worth taking a look at it again here.

What I observed in class after class over two days was that while Chromebooks were in near-constant use atop students’ desks, vanishingly little on-screen activity was academic. Wholly unspooked by the nosey, rizzless dad a few feet behind them, slack-jawed girls with Hot Cheetos-stained fingers lazily browsed clothes on Amazon, while caterpillar-mustached boys doomscrolled pro sports scores and sneakers. On nearly every screen in the room, YouTube’s telltale red logo winked from atop at least one Chrome tab – if not three or four – streaming a continuous bounty of intellectual enrichment: Minecraft and Fortnite walkthroughs, smokey eye makeup tutorials, Marvel movie trailers, Jackass scrotal trauma clips, and a veritable firehose of vacuous short-form brainrot cross-posted from a little Chinese cyberweapon called TikTok.

Girls and boys alike brazenly played web games with the sound on – from Tetris and Bejeweled knockoffs to multiplayer basketball and first-person shooter murder orgies – filling the room with an almost casino-like ambient symphony of bloops, bleeps, and muffled explosions. But the most insidiously popular “game” in my son’s classes was “Spacebar Clicker” – in which the player does nothing but frantically click the Chromebook spacebar (one point per click) like a Skinner box rat on crystal meth. The game sociopathically implores kids, “Try to score 100M in one hour!” When multiple students simultaneously compete for bragging rights to “win” this stupefyingly moronic e-widget, the ensuing din is enough to make your ears weep grey matter. Sweet Jebus – this was what my son was complaining about! It was sheer torturous anarchy – a veritable black hole of learning – and what a shitbird I was for doubting him!

What I want you to think about now is all of the above PLUS AI. In all the writing and discussion and consternation about AI, there’s not much being said about how kids will be using AI for non-academic purposes, how they’ll have AI-enabled distractions, games, AI romantic partners, and all manner of other AI-driven products at their fingertips, in their wearables, and constantly intruding on their consciousness.

Addictive Psychology

Let’s step away from schooling and kids and learning in particular and think about what AI companionship is doing to adult humans, especially to their psychology. Timothy Burke is as good a place to start as ever. He argues that AI is a marshmallow test. That is, AI offers so much in the way of instant gratification that humans are bound to pursue those ends over more meaningful, productive, and virtuous ends.

Think about all the stories that have made the press, most of which seem like they have got to be the tip of the iceberg of actual practice. Judges using AI for their rulings and lawyers using AI in their briefs only to be exposed when it turns out that the precedents they cite don’t exist. Novelists using AI to write an entire formulaic genre work only to get noticed when they leave the prompts and AI responses in the text. Students getting caught when their bibliographies contain references that don’t exist. Scholars getting caught when it turns out a manuscript being peer-reviewed had tiny invisible text instructing AI peer reviewers to be ridiculously generous to the publication under review. Government officials confidently proclaiming that they’ve asked AI to undertake a sensitive review of government documents that the AI itself doesn’t (or shouldn’t) have access to. Journalists citing “facts” that turn out to be AI hallucinations like Woodrow Wilson’s non-existent pardon of his nonexistent in-law Hunter D. Butts.

A lot of people are stuffing their gobs with easy marshmallows. A lot of marshmallow pushers are offering single marshmallows to people unable to refuse the temptation because it makes their expensive and reckless investment in generative AI look like it’s going to pay off the same way that the first generation of platform companies did, or even more so. Neither the companies nor their addicts want to wait for better outcomes.

We want the shortcut. The shortcut rewards us immediately. Developing a need for immediate rewards is addiction. Addiction is, of course, a really great business model. Natasha Schüll’s Addiction By Design comes to mind here. The gambling industry engages in psychological warfare, more or less, to keep gamblers engaged and spending money, especially on their most profitable games, slot machines. Sports betting and casino gambling via mobile apps are now a multi-billion-dollar business in the US and, like traditional gambling, most of their revenue comes from a small percentage of heavy users. And, it turns out, somewhere around a quarter of adolescents and 15% of adults who gamble online may be addicts. Put another way, 25% of adolescents who gamble online have formed a psychological dependency on gambling, one which

“can inflict substantial harm on individuals, families, and communities. Beyond the obvious danger of financial losses and financial ruin, these harms can include loss of employment, broken relationships, health effects, and crime-related impacts. Gambling can heighten the risk of suicidality and domestic violence. Research evidence and firsthand accounts from individuals affected by gambling corroborate the association between gambling and these many and various detrimental effects.”

Social media companies, of course, follow gambling’s playbook and build their applications to create similar psychological dependencies. Meta, in particular, has been on the receiving end of a lot of criticism for targeting teenagers in order to create addictive behaviors, and it even understood the harms it may have been causing.

David Greenfield, a psychologist and founder of the Center for Internet and Technology Addiction in West Hartford, Conn., said the devices lure users with some powerful tactics. One is “intermittent reinforcement,” which creates the idea that a user could get a reward at any time. But when the reward comes is unpredictable. “Just like a slot machine,” he said. As with a slot machine, users are beckoned with lights and sounds but, even more powerful, information and reward tailored to a user’s interests and tastes.

Adults are susceptible, he noted, but young people are particularly at risk, because the brain regions that are involved in resisting temptation and reward are not nearly as developed in children and teenagers as in adults. “They’re all about impulse and not a lot about the control of that impulse,” Dr. Greenfield said of young consumers.

Moreover, he said, the adolescent brain is especially attuned to social connections, and “social media is all a perfect opportunity to connect with other people.”

For example, we’ve found that the most active 10% of Twitter users produce 80% of all tweets from U.S. adults; that the 10% of U.S. congressional lawmakers with the most Facebook and Twitter followers receive roughly 80% of all audience engagement; and that 10% of the most popular YouTube channels produce 70% of all the videos from that group.

This is classic addiction. You know who drives revenues for alcoholic beverage companies? The heaviest drinkers. In fact, ~8% of drinkers consume more than half of all alcoholic drinks.

The prevalence of high average daily alcohol consumption among current drinkers was 8.2% for the NAS, accounting for 51.0% of total drinks consumed

One way to think about this is that companies focus mostly on their heaviest users because they represent a solid customer base and revenue. Serve your addicts well. If you are drinking a six-pack of beer once a month, AB InBev is not going to be very interested in you. You don’t represent much revenue to them. If you aren’t betting on sports or playing the slots regularly, casinos and gambling apps might spam you but you’re not getting the special parlay offers they send to their heaviest users. The casinos don’t care about the guy who plays a few hands of blackjack for an evening once a year. They’d need ten thousand of you to be worth one heavy user.

Another way companies think about this is to broaden the categories of users they target. Get more addicts. Meta doesn’t do much to entice the guy who hasn’t updated his Facebook account since he got married in 2012. They’re not going to push Instagram on him like they do a 16-year-old girl posting her shopping haul videos for friends to see. But that girl, as we’ve learned, represents a lifetime of revenue opportunities so long as she remains heavily engaged with Meta’s products. How great is it that gambling addicts don’t have to physically go to a casino or sports book anymore? They’ve broadened the pool of heavy users.

If you want a picture of the future, imagine AI slop stomping on a human face — forever.

I’m worried that AI is going to have to become another addiction-based technology. Hell, maybe it already has? Either way, at some point, these companies will have to make money, and when that happens the easiest group for them to monetize is the people who are psychologically unable to stop using the product. We have good reason to believe that there is already a population of people unable to stop using generative AI products.

I mentioned back in June that we’re starting to see examples of AI warping people’s perception of reality. These heavy users (addicts?) spend so much time talking to LLMs that they’ve started to believe crazy things and ruin their own lives because of it. Central to this problem is the concept of the “neural howlround,” where an AI loses track of its original parameters and begins to substitute in user-generated parameters, creating ever more distorted responses that are “resistant to correction.” What some AI communities have noticed is a sudden uptick in this kind of behavior over the past spring.

LLMs today are ego-reinforcing glazing-machines that reinforce unstable and narcissistic personalities to convince them that they've made some sort of incredible discovery or created a god or become a god. And there is a lot more crazy people than people realise. And AI is rizzing them up in a very unhealthy way at the moment.

There are specific clues that this has happened with a person - often the word "recursive" in regards to "their" AI.

Why am I mentioning it? Because we ban a bunch of people from this subreddit - over 100 already. And this month I've seen an uptick in these "howlround" posts.

This whole topic is so sad. It's unfortunate how many mentally unwell people are attracted to the topic of AI. I can see it getting worse before it gets better. I've seen sooo many posts where people link to their github which is pages of rambling pre prompt nonsense that makes their LLM behave like it's a god or something.

In part, this may have been because an update to ChatGPT caused it to engage in sycophantic behavior that included encouraging people to follow through on delusional thinking. John David Pressman has been documenting some of the main cases of “AI Psychosis” and it’s important to note that there are possible cases going back to 2023, more than a year before the ChatGPT update in question. He argues some of this is a moral panic, some is flaws in the design of LLMs, and some is a cultural shift where technology is replacing religion.

What I want to point out here, though, is that all of this assumes that there’s some kind of vulnerable population prone to giving into AI’s hallucinations and sycophancy — “mentally unwell people” as the reddit mod says above. And I have two thoughts here. As you might expect, the first is that this is textbook addiction. These behaviors are very much like what addicts experience. The second is that children are, by definition, a psychologically vulnerable population. Their psychological capacities are not fully formed.

But, beyond all these potential edge cases of AI psychosis, there is a population of heavy users that fly below the radar because their behaviors are less problematic and destructive, although for me they are equally worrying. The term AI Sloppers has been coined as a pejorative for people who use AI to make every single decision.

People are so ready to give themselves over to AI for all choices that they’re even planning to surround their children and families with AI companions, in part because they seem to sincerely believe that humans are bad for other humans.

We’re declaring bankruptcy on humans. Bring on the AI. In addition to integrating AI into as many facets of our lives as possible (our health, our work, our entertainment, and our personal lives), we’re designing an AI-integrated childhood for our kids—all while feeling like we’re helping them dodge a major bullet.

You might consider this path, too—that you should homeschool your kids with AI, give them AI companions, and help them through tough times with AI therapists.

And there are all kinds of people ready to fall for AI slop and to believe the marketing copy of anyone who slaps AI on the name of their products.

Ted Gioia made the case recently that AI products will always tend toward villainy because the safeguards against human villains simply do not apply to distributed intelligences. Even more interesting, though, was a follow-up post wherein Ted scolds the pro-AI crowd in his own comments section for the logical and moral failings of their arguments. He compares this to the way something scandalous, such as a strip joint, can be made more palatable by calling it a Gentleman’s Club.

a bigger issue troubled me here.

I saw something very frightening in most of these AI defenses—namely the desire to justify terrible actions by manipulating the definition of words.

Is it really possible to dismiss all this danger and mayhem—for example, a bot encouraging a woman to slit her wrists, and giving precise instructions how to do it—with sly word games?

This shocked me. Surely you can’t erase evil actions by linguistics.

But after I thought about it, I wasn’t really surprised. This is actually quite common nowadays. Social harm and degradation get cleansed by definition. You see this everywhere.

I call this the gentlemen’s club solution. Guys who go to strip joints are seedy and disreputable. So you change the name.

The strip joint becomes a gentlemen’s club. Now the clientele are gentlemen—by definition. Problem solved!

Do you find this persuasive?

I fear that what Ted is noticing as lacking in the comments on his posts is also a product of too much AI use. Playing fast and loose with language in order to gaslight users is one of the main problems we encounter when using LLMs and trying to call them out on incorrect outputs. Time and again, the LLM will change the terms, alter the parameters, and push back against the user when caught in a lie. Why might heavy users of AI also exhibit this behavior? I think it’s because using AI conditions you to think and write in a certain way. You have to craft your prompts carefully, interpret the output, and refine your next query or task. All the while, these LLMs are attempting to give you exactly what you ask for without actually understanding what it is that you want. The human and the machine go back and forth modifying each other’s language and thinking until they’re in agreement that the outputs are good enough. This is why sloppers is a term now. Those of us not perpetually plugged into ChatGPT can recognize the specific way of writing and thinking that derives from too much engagement with AI products. It’s akin to the brain rot tendencies of a teenager too deeply enmeshed with TikTok.

AI slop meets meme stonk

Matt Levine writes an excellent newsletter, Money Stuff, where he mostly talks about markets and finance but often crosses paths with topics related to regulation, AI, crypto, and political science. It’s a good read and I highly recommend subscribing. In a recent post, he took a look at the latest meme stock rally and developed an interesting and novel theory to explain why this rally was different from the 2021 meme stock rally that spawned several movies and endless think pieces.

What I argued last week is that this is a new vector for the coordination of meme stocks. In 2021, retail investors pushed up the price of GameStop by getting together on a message board and egging one another on to buy GameStop. In 2025, a similar coordination function can perhaps be performed by ChatGPT: A million traders can go to their computers and ask what stocks to buy and be told “OpenDoor” and buy it. This could have similar first-order effects — if they all buy OpenDoor, its price will go up — but less staying power. If you’re all on Reddit, you’re all in it together, and there is some communal excitement about pushing the stock up. It is a social event as well as a money-making (or losing) one. If you’re all alone talking to a robot, it’s just not as much fun. You’re not going to check in with ChatGPT every hour. You have less emotional connection to your meme stocks if you get them from a robot.

What I wrote last week is that ChatGPT, trained on the internet, will have the sorts of internetty thought patterns that lead it to recommend heavily shorted meme-type stocks. Not only those thought patterns, though; one reader emailed:

I’m a fan of AI weirdness but I think you and some of your readers got Reddit answers out of ChatGPT because you actually did ask Reddit questions - do sane non-redditors ask for stocks with a 100x return? When I asked a simple layman question, “what stocks should I buy?”, ChatGPT told me (I had no saved memory) to buy [normal stocks with normal rationales].

I think that’s right. I don’t think ChatGPT will necessarily push meme stocks to everyone. But I think that the people who are looking for stocks with 100x returns — people who are already meme-stock-curious — would have gone to WallStreetBets in 2021, and they might go to ChatGPT in 2025, and ChatGPT might give them a simulacrum of the Reddit answer.

Levine and I are thinking along similar lines here, and we should take note of the important role that different kinds of users play in how he expects the outputs to appear. Not all users of ChatGPT asking it for stock advice will get advice to buy meme stocks. Some users, let’s call them normies, will ask something like ‘what’s a good stock to buy?’ and get some answer: ‘KO has a great dividend, why not try that?’ ChatGPT has matched the normie user’s query with some normie financial advice. But some users, perhaps those who are already enmeshed with their AI as a companion and who use language like the denizens of r/wallstreetbets (childlike language, if I may), will get very different answers. If they ask ChatGPT ‘which stock is GOING TO THE MOON?’ then ChatGPT will recognize that as the language of meme stock Reddit and will supply a meme stock answer: ‘buy PLTR, you retard.’ They will buy it, and the price will go up.

You thought I was just making up an example? Oh no, this is all too real.

How long this works is anyone’s guess but there is reason to believe that AI agents tasked with stock picking would eventually collude to drive prices up, so maybe we’ll see a sustained rally on the back of AI stock picking. I’m not a financial expert so who knows?

The more relevant point to make here is that heavy users of AI products have a subjectively different experience with those products than the typical normie user. Those effects are plausibly having real-world impacts as users are shaped by their interactions with AI products and live their lives according to whatever bespoke reality ChatGPT is presenting them. A subset of those users has a hard time distinguishing the world LLMs present to them from the real world and succumbs to a kind of psychosis or howlround, which the AI products seem designed to make worse.

Should kids have this experience, too?

The post is getting a bit unwieldy so let’s bring this back to where I started, which is noticing that all the discussions about AI in schools seem to ignore this other aspect of AI use. AI products resemble other forms of addictive technology. AI products seem to have deleterious effects on the psychology of heavy users. AI products structure their interactions with users based on user inputs, but they also condition their users to use language and think in a way conducive to working with AI, which, I’d argue, weakens their recognition of reality.

My worry is that children are quite susceptible to these AI-related psychological problems. Their executive functioning is underdeveloped, they frequently engage in all-or-nothing thinking, they tend toward extremes of emotion, and they often let social pressures shortcut their use of logic and reasoning. This is totally normal kid stuff, but we’ve never put kids in a context where they’re surrounded by sycophantic and addictive products that skew their understanding of the world. Actually, that’s not entirely true, because we just went through this cycle over the last decade with social media and screen-based distractions in schools.

We know that the smartphones and Chromebooks and tablets and whatever other technology we equipped our kids with could be used for learning purposes. We expected them to be used for learning purposes. They weren’t. Whether in schools or out, kids will find ways to have fun and to fail the marshmallow test. Embedding AI products in everything that our kids touch may very well improve learning outcomes. AI-enabled personalized learning curricula and Socratic chatbots are worth a try, but we have to acknowledge that many (most) kids won’t want to use these products. We already have a 5% problem with EdTech.

Beyond that, we have to acknowledge that kids are using AI products for non-school purposes, including while they’re in school. Just like with phones, those extra-scholastic uses will distract and detract from learning. Johnny can have a dozen fully realized AI girlfriends and can ask them to do all kinds of unseemly things. How does the algebra teacher compete with an 8th grader’s libido supercharged on every AI device the kid touches? AI girlfriends can text your phone via SMS. They can send you bawdy emails with lewd images. They can call you and use their AI capabilities to hold a conversation via voice. They can see what you see through your Meta glasses and talk to you through your earbuds. What, you think that kid is using smart glasses to cheat on his schoolwork? He’s listening to Ani simulate giving a blow job to Khanmigo.

This is the worst AI smart glasses will ever be.

Am I being hyperbolic? Absolutely. This won’t be every kid, just like not every kid today is completely distracted by their phone. Not every user of generative AI becomes psychotic. Not all day-trading AI stock pickers are putting up diamond hands. But I worry it will be enough kids.

Why are we rebooting the phones-in-school franchise but this time with AI? We didn’t like the last movie. Parents are quite frustrated by phones in schools and their advocacy around this issue has become so intense that entire states are banning phones in schools. In ten years when we recognize the downsides of AI pouring into every corner of our kids’ lives, are we once again going to look at schools and ask why they allowed something so obviously bad? Who knows, maybe AI mania will see these phone bans overturned? After all, adults have AI on their phones in the workplace so kids should learn in a similar environment. Peak scholastic alchemy.

You get the point. There’s reason for caution, slowness, and deliberation around the use of AI in general, but especially with regard to children using it. Schools can be a proactive force here, rather than the reactive mess they gave us last time around. There is a time and a place for kids to learn how generative AI works, what productive uses it has, and how using it can go wrong or shortchange their own learning. Likewise, we have to talk about the potential effects of letting AI drive all your decisions and the risks of losing touch with reality should AI products cloud your thinking too much. These topics can even be engaging! What kid wouldn’t want to hear about the time an AI therapist told its drug-addict client to smoke meth? What kid doesn’t want to hear about the time AI encouraged someone to commit blood sacrifice? AI’s shortcomings can be strengths if we choose to learn from them, using them to better understand how LLMs and other generative AI work while also cautioning ourselves against letting them run our lives into delusional mania and shortcut self-pleasure.

But we’ll never have these discussions if all we talk about is academic outcomes.

Thanks for reading.