Links and Commentary 5/23/25
More Yelling at Ezra Klein, SCOTUS Abides, Fraud in AI Research, Harvard Fights while Columbia Boos, The Case for Human Writing
Welcome to Scholastic Alchemy! I’m James and I write mostly about education. I find it fascinating and at the same time maddening. Scholastic Alchemy is my attempt to make sense of and explain the perpetual oddities around education, as well as to share my thoughts on related topics. On Wednesdays I post a long-ish dive into a topic of my choosing. On Fridays I post some links I’ve encountered that week and some commentary about what I’m sharing. Scholastic Alchemy will remain free for the foreseeable future, but if you like my work and want to support me, please consider a paid subscription. If you have objections to Substack as a platform, I maintain a parallel version using BeeHiiv and you can subscribe there.
The post is a bit late this morning, apologies. I didn’t have a chance to pre-write the links like I usually do.
-James
More Yelling at Ezra Klein
I wrote about Ezra Klein’s interview with Rebecca Winthrop on Wednesday but wanted to make one more point. Most of what I wrote a few days ago was focused on how it seemed like Ezra was not listening to Winthrop and didn’t seem interested in doing anything other than broadcasting his anxiety about the whole thing. I did sort of connect his anxieties about AI to the distorted version of human capital theory whereby schools are somehow responsible for creating the economy and the job market. It’s backwards! Economists seem to get that schools are responding to market conditions, not creating them, but somehow everyone else doing policy or, I dunno, making a podcast about education can’t get it right. Let me explain how absurd this demand is if you really think about it. Ezra starts the interview like this:
[EK] So I have a 3-year-old and a 6-year-old. I feel like I cannot predict with A.I. what it is that society will want from or reward in them in 15 or 16 years, which makes these questions in the interim — How should they be educated? What should they be educated toward? — feel really uncertain to me. My confidence is very, very low that schools are set up now for the world they’re going to graduate into.
Ezra, my brother, I am with you right up until that last sentence. I agree that AI products have the potential to be very disruptive to the labor market and that we, as a society, are not going to be good at predicting what the world will look like in 15 or 16 years. I agree that today’s generative AI products force us to ask some hard questions about what it means to be educated and how one should become educated. These are important questions. But here’s my question: why do we expect schools to have the answers? In what world do schools have a magical crystal ball to predict the labor market 15 years in the future?
We have a multi-billion dollar system of venture capital whose entire job is to pick ideas for companies, give them money, and earn a return on that investment once those companies are successful. It’s literally their job to make educated guesses about the future and then bet money on those guesses. Nine in ten of those investments fail. Nine times out of ten, they make the wrong prediction about the future, and they’re not even looking 15 or 16 years out. And nobody in or around that industry thinks it’s bad that they have a 90% failure rate because that’s the nature of making bets on the future.
Somehow, though, schools are supposed to get it right all the time and over a much longer timeframe? What is the mechanism by which schools are supposed to make accurate predictions about the future so they can give Ezra Klein’s kids the specific skills needed for the jobs of tomorrow? Schools do not have the resources or expertise or models or years of experience that venture capital has in making bets about the future, but they’re expected to do better? HOW DOES THAT WORK? It doesn’t even have to be PE/VC investments. What company or government or social institution besides schools is expected to accurately know the future, on pain of being deemed a failure?
On one hand, when Ezra says that his “confidence is very, very low,” it makes sense. Nobody is good at predicting the future! But then why be all pissy and anxiety-ridden over it? Why blame schools for a flaw that seems universal to the human condition? It’s honestly maddening sometimes to hear smart, worldly, well-studied people say completely insane things.
SCOTUS Abides
The Supreme Court technically rejected attempts by St. Isidore Catholic Virtual School to get access to public funds. I say technically rejected to highlight that this is not really a victory so much as an inconclusive draw. The court split 4-4, with Justice Amy Coney Barrett recusing herself due to her personal connections with the charter school’s advisory team. And hey, good on her! I did not expect a principled move like that from today’s court. Certainly Thomas and Alito do not have that kind of integrity.
So, the lower court’s decision stands and the religious school cannot receive government funds. Why am I not celebrating? Well, for one, a 4-4 decision is not precedent-setting and carries little legal weight beyond the case at hand. A different school could end up before SCOTUS next year, and the court could rule differently. Beyond that, a different case wouldn’t force ACB to recuse herself.
We’re not out of the woods here. Just a momentary pause in a clearing.
Fraud in AI Research
Benjamin Riley surveys some recent cases of fraud and questionable methods in AI research. One interesting note is that several of the studies are about AI in education, which is why they caught my eye. The whole post and everything he links are worth a read, but here are some highlights. First, an education researcher on a listserv picks apart one of the studies included in a recent meta-analysis:
None of this is correct.
The paper looked at the impact of using GPT to generate lesson plans that were used to teach third-graders (not secondary students). It compared a section taught by one of the authors using these GPT-generated lesson plans to a randomly-selected class taught by a different teacher. They used students as the unit of analysis and found no significant difference. The meta-analysis reports an effect size of 0.07, but I’m not sure where they got that (they gave pre- and posttests, but the main analysis just looked at posttest scores).
I guess this paper technically meets the inclusion criteria for the meta-analysis, but that just shows that their inclusion criteria need to be revisited. Given that one paper I looked at is so clearly misclassified, I’m not sure I trust anything in the meta-analysis.
Another researcher on the listserv then chimed in with the suggestion that “people glance through a few of the other studies that were included here. It's a perfect example of garbage in garbage out.”
…
This study received a lot of attention, with write-ups in MIT Technology Review, Education Week, the 74, K-12 Dive, and MarkTechPost, among others. I was even quoted in The 74 story praising the study design, and observing that if its findings held up, it offered promise for future tutoring efforts.
But after closer review, aided by other education researchers, I no longer see that promise.
…
What I am prepared to say is that the authors of this study presented their findings in a way that deliberately plays up the positive impact of AI, while minimizing the evidence that calls into question the value of using AI to enhance tutoring. And I find this troubling.
Seems bad! He also mentions that AI tutoring study done in Nigeria. I talked about it back in January and noted some of the study’s problems. Ben adds that the study still has not been published anywhere, so, apparently, we just have to take the researchers’ word for it?
Again, these sorts of gains should on their face raise major red flags, but what’s worse is that the study involved human teachers acting “as ‘orchestra conductors’…who guided students in using the LLM, mentoring them, and offering guidance and additional prompting.”
Oh you don’t say! Unsurprisingly, the kids who received this additional “orchestrated support” from teachers performed better on tests than the kids who received nothing at all, but who knows what role AI played. Of course, if we had a formal write-up perhaps we could tease this out, but despite promises that one would be forthcoming, nothing appears to have been published.
If we keep seeing fraudulent and sloppy research practices with regard to AI in education, perhaps that’s telling us something about the utility of AI in education? Remember, there’s a long history of failed reform efforts that turned out to have been based on bad research!
Harvard Fights, Columbia Boos
Meanwhile, at Columbia, where the administration decided to meet Trump’s demands, the acting president got booed. Seems like the appropriate reaction to me, too.
The Case for Human Writing
Peter Greene thinks there’s value in human writing. He references some of the same material that I have and ends up recommending John Warner’s book, More Than Words.
Trends in education have taught generations a devalued view of writing (and reading as well). Warner observes that we are on our second or third generation “of students who experience school not as an opportunity for learning, but a grim march through proficiencies, attached to extremely high stakes, stakes often measured by tests that are not reflective of genuine learning.” The result that he has seen in his own classrooms are students who have been “incentivized to not write but instead to produce writing related simulations.” In education, writing has become performance rather than communication, and if we want students to simply follow a robotic algorithm to create a language product--well, that is exactly the task that a LLM is well-suited to perform.
He doesn’t quite get to dead schooling theory, but I have a feeling that similar ideas will be emerging soon.
Thanks for reading!