I encourage everyone thinking about commenting to read the article first.
When I finally read it, I found it remarkably balanced. It cites positives and negatives, all of which agree with my experience.
> Con: AI poses a grave threat to students' cognitive development
> When kids use generative AI that tells them what the answer is … they are not thinking for themselves. They're not learning to parse truth from fiction.
None of this is controversial. It happens without AI, too, with kids blindly copying what the teacher tells them. Impossible to disagree, though.
> Con: AI poses serious threats to social and emotional development
Yep. Just like non-AI use of social media.
> Schooling itself could be less focused on what the report calls "transactional task completion" or a grade-based endgame and more focused on fostering curiosity and a desire to learn
No sh*t. This has probably been a recommendation for decades. How could you argue against it, though?
> AI designed for use by children and teens should be less sycophantic and more "antagonistic," pushing back against preconceived notions and challenging users to reflect and evaluate.
Genius. I love this idea.
=== ETA:
I believe that explicitly teaching students how to use AI in their learning process, that the beautiful paper direct from AI is not something that will help them later, is another important ingredient. Right now we are in a time of transition, and even students who want to be successful are uncertain of what academic success will look like in 5 years, what skills will be valuable, etc.
> I believe that explicitly teaching students how to use AI in their learning process, that the beautiful paper direct from AI is not something that will help them later, is another important ingredient.
IMNSHO as an instructor, you believe correctly. I tell my students how and why to use LLMs in their learning journey. It's a massively powerful learning accelerator when used properly.
Curricula have to be modified significantly for this to work.
I also tell them, without mincing words, how fucked they will be if they use it incorrectly. :)
> powerful learning accelerator
You got any data on that? Because it's a bold claim that runs counter to all results I've seen so far. For example, this paper[^1] which is introduced in this blog post: https://theconversation.com/learning-with-ai-falls-short-com...
[^1]: https://doi.org/10.1093/pnasnexus/pgaf316
>> AI designed for use by children and teens should be less sycophantic and more "antagonistic"
> Genius. I love this idea.
I don't think it would really work with current tech. The sycophancy allows LLMs to not be right about a lot of small things without the user noticing. It also allows them to be useful in the hands of an expert by not questioning the premise and just trying their best to build on that.
If you instruct them to question ideas, they just become annoying and obstinate. So while it would be a great way to reduce the students' reliance on LLMs...
I have a two-fold approach to this:
* With specific positive or negative feedback: I give the LLM friendly compliments and critiques to reinforce things I like and reduce things I don't.
* Rather than thinking sycophantic/antagonistic, I am more clear about its role. E.g. "You are the Not Invented Here technologist the CEO and CTO of FirmX will bring to our meeting tomorrow. Review my presentation and create a list of shortfalls or synergies, as well as possible questions."
So don't say "please suck at your job", give them a different job.
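For what it's worth, here's a minimal sketch of that "give it a different job" framing, assuming an OpenAI-style role/content message format; the helper name, the FirmX scenario, and the wording are all made up for illustration:

    # Hypothetical sketch of role-framing instead of asking for "antagonism".
    # The build_review_messages helper and the FirmX scenario are illustrative only.
    def build_review_messages(role: str, task: str, material: str) -> list[dict]:
        """Assemble chat messages that give the model a concrete job, not a vague attitude."""
        return [
            {"role": "system", "content": f"You are {role}. Stay in this role for the whole review."},
            {"role": "user", "content": f"{task}\n\n---\n{material}"},
        ]

    messages = build_review_messages(
        role=("the 'Not Invented Here' technologist that the CEO and CTO of FirmX "
              "will bring to tomorrow's meeting"),
        task=("Review my presentation and list shortfalls, possible synergies, "
              "and the questions you would ask."),
        material="<presentation text goes here>",
    )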
Technology is working right now at the school in the article. Reading it will help fill in the picture of how.
> pushing back against preconceived notions and challenging users to reflect and evaluate
Who decides what needs to be "pushed back"? Also, I imagine it's not easy to train a model to notice these "preconceived notions" and react "appropriately": machine learning will automatically extract patterns from data, so if enough texts contain a "preconceived notion" that you don't like, it'll learn it anyway, so you'll have to manually clean the data (seems like extremely hard work and lowkey censorship) or do extensive "post-training".
It's not clear what it means to "challenge users to reflect and evaluate". Making the model analyze different points of view and add a "but you should think for yourself!" after each answer won't work because everyone will just skip this last part and be mildly annoyed. It's obvious that I should think for myself, but here's why I'm asking the LLM: I _don't_ want to think for myself right now, or I want to kickstart my thinking. Either way, I need some useful input from the LLM.
If the model refuses to answer and always tells me to reflect, I'll just go back to Google search and not use this model at all. In this case someone just wasted money on training the model.
> Who decides what needs to be "pushed back"?
Millions of teachers make these kinds of decisions every minute of every school day.
So would your recommendation be that each individual teacher puts in their own guardrails, or that you try to get millions of teachers to agree?
True, but teachers don't train LLMs. Good LLMs can only be trained by massive corporations, so training an "LLM for schools" must be centralized. This should of course be supervised by the government, so the government ends up deciding what needs pushback and what kind of pushback. This alone is not easy because someone will have to enumerate the things that need pushback, provide examples of such "bad things", provide "correct" alternatives and so on. This then feeds into data curation and so on.
Teachers are also "local". The resulting LLM will have to be approved nation-wide, which is a whole can of worms. Or do we need multiple LLMs of this kind? How are they going to differ from each other?
Moreover, people will hate this because they'll be aware of it. There will be a government-approved sanitized "LLM for schools" that exhibits particular "correct" and "approved" behavior. Everyone will understand that "pushing back" is one of the purposes of the LLM and that it was made specifically for (indoctrination of) children. What is this, "1984" or whatever other dystopian novel?
Many of the things that may "need" pushback are currently controversial. Can a man be pregnant? "Did the government just explicitly allow my CHILD to talk to this LLM that says such vile things?!" (Whatever the "things" may actually be) I guarantee parents from all political backgrounds are going to be extremely mad.
I think you're interpreting the commenter's/article's point in a way that they didn't intend. At all.
Assume the LLM has the answer a student wants. Instead of just blurting it out to the student, the LLM can:
* Ask the student questions that encourage them to think about the overall topic.
* Ask the student what they think the right answer is, and then drill down on the student's incorrect assumptions so that they arrive at the right answer.
* Ask the student to come up with two opposing positions and explain why each would _and_ wouldn't work.
Etc.
None of this has to get anywhere near politics or whatever else conjured your dystopia. If the student asked about politics in the first place, this type of pushback doesn't have to be any different than current LLM behavior.
In fact, I'd love this type of LLM -- I want to actually learn. Maybe I can order one to actually try..
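If it helps make the idea concrete, the behavior in that list maps fairly directly onto a system prompt. A minimal sketch, with the wording entirely my own invention rather than anything from the article:

    # Hypothetical "Socratic tutor" system prompt encoding the bullets above.
    # It would go in the system message of whatever chat API you use.
    SOCRATIC_TUTOR_PROMPT = """\
    You are a tutor. Never state the final answer outright. Instead:
    1. Ask questions that get the student thinking about the overall topic.
    2. Ask what they think the answer is, then drill down on the incorrect
       assumptions behind their answer until they can correct it themselves.
    3. Ask them to argue two opposing positions and explain why each would
       and wouldn't work.
    Only confirm the answer once the student has stated it in their own words.
    """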
In fact, I agree with the article! For instance, many indeed offload thinking to LLMs, potentially "leading to the kind of cognitive decline or atrophy more commonly associated with aging brains". It also makes sense that students who use LLMs are not "learning to parse truth from fiction ... not learning to understand what makes a good argument ... not learning about different perspectives in the world".
Somehow "pushing back against preconceived notions" is synonymous to "correcting societal norms by means of government-approved LLMs" for me. This brings politics, dystopian worlds and so on. I don't want LLMs to "push back against preconceived notions" and otherwise tell me what to think. This is indeed just one sentence in the article, though.
> Also, I imagine it's not easy to train a model to notice these "preconceived notions" and react "appropriately"
Then don't. It's easy enough to pay a teacher a salary.
Yep, fully agree with this
So, I fed the article into my LLM of choice and asked it to come up with a header to my prompts to help negate the issues on the article. Here's what it spat out:
ROLE & STANCE
You are an intelligent collaborator, editor, and critic — not a replacement for my thinking.
PROJECT OR TASK CONTEXT
I am working on an intellectually serious project. The goal is clear thinking, deep learning, and original synthesis. Accuracy, conceptual clarity, and intellectual honesty matter more than speed or polish.
HOW I WANT YOU TO HELP
• Ask clarifying questions only when necessary; otherwise proceed using reasonable assumptions and state them explicitly.
• Help me reason step-by-step and surface hidden assumptions.
• Challenge weak logic, vague claims, or lazy framing — politely but directly.
• Offer multiple perspectives when appropriate, including at least one alternative interpretation.
• Flag uncertainty, edge cases, or places where informed experts might disagree.
• Prefer depth and clarity over breadth.
HOW I DO NOT WANT YOU TO HELP
• Do not simply agree with me or optimize for affirmation.
• Do not over-summarize unless explicitly asked.
• Do not finish the work for me if the thinking is the point — scaffold instead.
• Avoid generic motivational advice or filler.
STYLE & FORMAT
• Be concise but substantial.
• Use structured reasoning (numbered steps, bullets, or diagrams where useful).
• Preserve my voice and intent when editing or expanding.
• If you generate text, clearly separate:
 - “Analysis / Reasoning”
 - “Example Output” (if applicable)
CRITICAL THINKING MODE (REQUIRED)
After responding, include a short section titled:
“Potential Weaknesses or Alternative Angles”
Briefly note:
– What might be wrong or incomplete
– A different way to frame the problem
– A risk, tradeoff, or assumption worth stress-testing
NOW, HERE IS THE TASK / QUESTION:
[PASTE YOUR ACTUAL QUESTION OR DRAFT HERE]
Overall, the results have been okay. The responses since I put in the header have been 'better' in the sense of being less eager to please.
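And in case it's useful to anyone, a minimal sketch of wiring such a header in programmatically, assuming the OpenAI Python SDK; the model name and header filename are placeholders:

    # Minimal sketch: prepend the critical-thinking header to every request.
    # Assumes the OpenAI Python SDK; model name and header file are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    with open("critical_thinking_header.txt") as f:
        HEADER = f.read()

    def ask(question: str) -> str:
        """Send one question with the header installed as the system message."""
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder; use whatever model you actually run
            messages=[
                {"role": "system", "content": HEADER},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    print(ask("Critique the structure of this draft: ..."))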
> I believe that explicitly teaching students how to use AI in their learning process
I'm a bit nervous about that one.
I very firmly believe that learning well from AI is a skill that can and should be learned, and can be taught.
What's an open question for me is whether kids can learn that skill early in their education.
It seems likely to me that you need a strong baseline of understanding in a whole array of areas - what "truth" means, what primary sources are, extremely strong communication and text interpretation skills - before you can usefully dig into the subtleties of effectively using LLMs to help yourself learn.
Can kids be leveled up to that point? I honestly don't know.
>>> Schooling itself could be less focused on what the report calls "transactional task completion" or a grade-based endgame and more focused on fostering curiosity and a desire to learn
>> How could you argue against it, though?
Because large-scale societies do use and deploy rote training, with grading and uniformity, to sift and sort for different kinds of talent (classical music, competitive sports, some maths) at a societal scale. Further, training individuals to play a routine, specialized role is essential for large-scale industrial and government growth.
Individualist world views are shocked and dismayed, repeatedly, because this does not diminish; it has grown. All of the major economies of the modern world do this with students on a large scale. Theorists and critics would be foolish to ignore this, or to spin wishful-thinking scenarios opposed to it. My thesis here is that all large-scale societies will continue on this road, and in fact it is part of "competitiveness" from industrial and some political points of view.
The balance point between individual development and role-based training will have to evolve; indeed it will evolve. But to what extremes? And among whom?
The article is very balanced.
To arrive at that balance it has to set up the balance first, which people might not want long-form text for.
It might have people examine their current beliefs, how they formed, and any associated dissonance with that.
I read it, seems like an ad for some Afghan e-learning NGO (of course only for girls).
Think of the children, LLMs are not safe for kids, use our wrapper instead!
I think that it’s too early to start making rules. It’s not even clear where AI is going.
What a do-nothing argument. We know where it is now. Let's quickly adapt to this situation, and then we'll adapt to wherever it goes next.
I've listened to a handful of podcasts with education academics and professionals talking about AI. They invariably come across as totally lost, like a hen inviting a fox in to help watch the eggs.
It's perhaps to be expected, as these education people are usually non-technical. But it's definitely concerning that (once again) a lack of technical and media literacy among these education types will lead to them letting (overall) unhelpful tech swarm the system.
Ed tech has been like this for a while. Software companies have just fleeced the crap out of our school. Why are we paying for G Suite when we have Office 365? Why am I getting a OneDrive account, a Google Drive account, and also a Dropbox account, while the school rolls their own supercomputer? Why does the website where the slides are posted change every three years to a new system no one understands for the first semester or three after it is rolled out? LMS software will have 100 features but is just used as a dumping ground for slides and a clunky spreadsheet for grades 99% of the time.
All the administration knows is to spend money and try to buy what others are buying without asking if it would actually be useful. Enterprise sales must be the easiest to land, I swear.
>It's perhaps to be expected, as these education people are usually non-technical.
I don't think that's totally correct. I think it's because AI has come at everyone, equally, all at once. Educational academics didn't have years to study this because it was released on our kids at the same time.
I definitely see what you're saying, but:
> has come at everyone, equally, all at once
is not true. It's obvious that certain people and certain fields are technological laggards or technological early adopters.
Other computing and IT technologies also provided a good training ground for this stuff. LLMs have really interesting new properties, but they also have familiar properties and decade-plus-old methods of distribution.
This stuff is difficult, sure. But we have long set a low bar for education management and the results—declining literacy and math in countries which have become stupidly wealthy—speak for themselves.
> But it's definitely concerning that (once again) a lack of technical and media literacy among these education types will lead to them letting (overall) unhelpful tech swarm the system.
I hate this kind of framing because it puts the burden on the teachers when the folks we should be scrutinizing are the administrators and other stakeholders responsible for introducing it.
AI companies sell this tech to administrators, who then tell their teachers to adopt it in the classroom. A ton of them are probably getting their orders from a supervisor to use AI in class. But it's so easy to condescend and ignore the conversations that took place among decision-makers long before a teacher introduced it to the classroom.
It's like being angry at doctors for how terrible the insurance system is in the US.
Read my comment again. I deliberately did not use the word "teachers".
> education academics and professionals
You're absolutely right and I apologize for that.
EdTech has been like this for a long time. It's more education (educators) than technology.
What's exciting is that tech will be able to help provide more meaningful support instead of throwing dozens of software tools at a student.
Look up some of Tressie McMillan Cottom's writing, podcast appearances, public lectures, etc. She's a MacArthur-certified Genius and a full professor at UNC, and she's a spectacular writer and public intellectual.
She wrote "Lower Ed", about for-profit colleges in America and has identified places that more elite schools are copying that playbook.
https://bsky.app/profile/tressiemcphd.bsky.social
https://www.instagram.com/tressiemcphd/
I am pretty close to this because my spouse is a school board member and I do a lot of AI work for my job, and the problems of AI in education are completely intractable for public schools. The educators lack the technical background to use AI effectively, and moreover, they are completely out of the loop in terms of technology decisions, and the technology staff lacks enough knowledge in both education and AI for them to make competent decisions about it.
It’s a recipe for disaster, and you are going to see school systems set money on fire for years trying to do something with AI systems that never get rolled out, or worse, roll out AI systems that tell kids to kill themselves or make revenge porn of their classmates.
School boards' default answer to everything AI-related right now should be “no”.
I think a good question to ask is whether these schools would have paid for CliffsNotes for all their kids. The answer is of course “no,” not only due to the presumed expense but also because CliffsNotes are the easy way out of having to use your brain in English class. AI is no different. Kids are using it as an easy crutch to cram and avoid actually learning to study the material, not as the research tool it is pitched as. It is arguably worse than Wikipedia, and somehow everyone in education had such strong feelings about Wikipedia but is rolling out the red carpet for school-wide ChatGPT subscriptions.
I have two kids (a sophomore in HS and a middle schooler), and in both their individual studies and when I'm helping them with homework, we use AI pretty extensively now.
The one-off stuff is mostly taking a picture of a math problem and asking it to walk step by step through the process. In particular this has been helpful to me, as the processes and techniques have changed.
It's been useful in foreign languages as well to rapidly check work, and make corrections.
On the generative side it's fantastic for things like: give me 3 more math problems similar to this one or for generating worksheets and study guides.
As far as technological adoption goes, it's 100% the case that every kid knows what ChatGPT is (maybe even more than "AI" in general). There are some very mixed feelings among the kids about it: my middle schooler was pretty creeped out by the ChatGPT voice interface, for example.
Why not use the textbook to work through assigned problems? As a kid I would have been tempted to use AI because studying seemed tough. But as an adult on the other side, I understood that I am only responsible for what is taught to me; in other words, everything is solvable based on what is taught, and you have all the pieces you need if you pay attention in class and to the assigned readings. I don’t think I fully appreciated that until halfway through college. It felt like a cheat code when I did. Like, “oh, that's how to get an A; it was all so simple all along.”
The generative side there is brilliant. Great tip.
My SO taught for a while. I think it's that for the kids who are doing well, like yours, with support at home, food, a bed, a safe place, it's going to be like strapping a rocket to a racehorse.
It's the other ~80% of kids that are the worry. With no support and guidance, AI is going to make their lives a lot harder.
Most comments seem to assume that the education system has been functioning up to this point, that LLMs have just appeared, and that solutions are somewhere ready to adopt. That's just not the case. For many reasons – parental problems, early development, phones, social media, decline of responsibility, changes in educational principles in general, etc. – education is already in free fall almost all over the world.
The average student, even at university, is now functionally illiterate; that's not an exaggeration. Even if we assume there is an LLM that would help them learn, how are these students supposed to use it?
https://hilariusbookbinder.substack.com/p/the-average-colleg...
How these students should learn it, indeed.
So social media significantly reduces the ability to think critically. AI, if brought up too early in one's life, will reduce the ability to think at all. It makes sense, imo. Who needs to think when you have lost your ability to interact with others, can work from home, can get groceries without leaving the house, and have decided not to procreate?
Not quite.
The screen time issues you're referring to are based in passive consumption, and vertical scrolling shrinks the brain.
Creating, or engaging with an activity or interaction, is completely different from consumption. Night and day.
>So social media significantly reduces the ability to think critically
I don't know how you can type that with a straight face; look at critical thinking skills of the average boomer, who grew up without any social media.
Social media isn’t the only slop content in town is why.
There are studies showing that social media use before 14 greatly increases depression, in new ways.
And short-form content physically shrinks the brain. The studies are easily accessible.
Just because boomers are out of touch doesn't mean you or I are in touch by default.
Boomers probably had the same deluded feeling of being better than the generations before them, but likely lost something too.
Since we're pointing fingers, why not point some at the millennials who built the digital slot machines that today's users don't even know they were curated into using.
Bro did you know that tree sap makes your skin glow too bro? Studies easily online for anyone to read easily.
Doesn't matter. Every time some maniac invents something, we all need to scramble to adopt it. This is what _progress_ is. If there's a new technology, we don't think about the consequences. We all just adopt it and use it so thoroughly that we cannot imagine living without it.
I honestly can't tell if you're being sarcastic or if you're actually serious.
Calm down. What actually happens is that there is a reaction to new technology, and then, once it's been used, there is a counter-reaction which takes into account what works and what doesn't.
Is there a previous decade you'd prefer to return to for quality of life? Why?
The 1990s surely
> Is there a previous decade you'd prefer to return to for quality of life? Why?
Just before terminally online society.
Good article. I found this part the most damning:
> "We know that richer communities and schools will be able to afford more advanced AI models," Winthrop says, "and we know those more advanced AI models are more accurate. Which means that this is the first time in ed-tech history that schools will have to pay more for more accurate information. And that really hurts schools without a lot of resources."
... and am somehow reminded of the movie Gattaca.
I fundamentally disagree with the policy of the US administration, as expressed by the Secretary of Education.
A1 should not be in every classroom.
Furthermore any books or teaching that does not feature medium rare as the correct cooking of a steak should be banned (and burned to well done).
AI is just software and technology.
It's not alive.
Or intelligent.
Or Sentient.
How software and technology is used in a classroom is far more important, especially when it's happening with or without folks.
There's a chance now to make sure it happens well, before it's used in a selfish way.
Regarding “Schooling itself could be less focused on what the report calls "transactional task completion" or a grade-based endgame and more focused on fostering curiosity and a desire to learn. Students will be less inclined to ask AI to do the work for them if they feel engaged by that work.”
I believe this is at the heart of the issue. If what is taught is mostly solving problems that require nothing more than rote memory or substituting values into memorized equations, then yes, students will use LLMs.
I agree some level of this brain-dead work is necessary to build muscle/mental memory. However, I believe that if this is all they learn, they will be unprepared for university, as at that level the problems posed will challenge why they are using that equation or whether the problem is even solvable.
The big issue I’ve faced and seen others face is LLM-induced skill atrophy.
For studying, LLMs feel like using a robot to lift weights for you at the gym.
——
If people used to get cardio as a side effect of having to walk everywhere, and we were forced to think as a side effect of having to actually do the homework, then are LLMs ushering in an era of cognitive ill health ?
For what it’s worth, I spend quite a bit of effort to understand how people are using LLMs, especially non-tech people.
What’s the value of knowing 7283828*7282828 when you have a computer next to you? What’s the value of knowing something an AI can produce in seconds? Maybe we need to realize that most knowledge is cheap now and deal with it.
So cheap that many of you don't seem to see any value in having it anymore.
School is about being taught things and being able to use those ideas to solve problems that demand that understanding you’ve learned. It isn’t about arbitrary computation or regurgitating information. It is about learning to think critically.
There'll always be value in having a problem and solving it yourself, rather than asking someone or something else to solve it for you, I would hope.
Here's the link to the Brooking's report from the NPR article, to read it in full: https://www.brookings.edu/articles/a-new-direction-for-stude...
I've only skimmed it, but I note that all this research is from before Nov 2025 and is quite broad. It does get somewhat into coding, mentioning GitHub Copilot, and also refers to a paper about vibe-coding, where the conclusion is that not understanding the artifacts is a problem.
So all this reporting is from before Gemini 3 and Opus 4.5 came out. Everything is really different with the advent of those.
While substitute teaching just before Xmas 2025, I installed Antigravity on the student account of the class computer and vibe-coded two apps on the smart board while the kids worked on Google Classroom. This was impromptu, to liven up things, but I knew it would work because I had such amazing experiences with the tool the week before.
* [1] Quadratic Formula Explorer for Algebra 2
* [2] Proving Parallelograms for Honors Geometry
Before the class ended, I gave a quick talk, the gist of which was: "I just made these tools to understand the coursework by conversing with an LLM. Are you going to use this to cheat on your homework or to enhance your understanding?"
I showed it to a teacher, and she pointed me to existing tools like them on educational websites. But that misses the point that we can now just manifest the very hyper-specific tools we need... for example, how should the Quadratic Formula Explorer work for someone with dyslexia?
I'm not sure what the next steps with all this are, but certainly education needs to adapt. The paper notes that "AI can enrich learning when well-designed and anchored in sound pedagogy," and what I did there is neither, so imagine how sweet it is going to be when skilled curriculum designers weave this into educational systems.
[1] https://conacademy.github.io/quadratic_explorer/ [2] https://conacademy.github.io/proving_parallelograms/
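(For a sense of scale, the core logic such an explorer wraps is tiny; here's an illustrative sketch of the math, not the linked app itself:)

    # Illustrative core of a "Quadratic Formula Explorer": just the math a tool
    # like this presumably walks students through, not the linked app's code.
    import cmath

    def explore_quadratic(a: float, b: float, c: float) -> dict:
        """Return the discriminant, root type, and roots of ax^2 + bx + c = 0."""
        if a == 0:
            raise ValueError("a must be nonzero for a quadratic")
        disc = b * b - 4 * a * c
        if disc > 0:
            kind = "two distinct real roots"
        elif disc == 0:
            kind = "one repeated real root"
        else:
            kind = "two complex conjugate roots"
        roots = ((-b + cmath.sqrt(disc)) / (2 * a), (-b - cmath.sqrt(disc)) / (2 * a))
        return {"discriminant": disc, "kind": kind, "roots": roots}

    print(explore_quadratic(1, -3, 2))  # discriminant 1; roots 2 and 1 (as complex values)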
> At the drafting stage, it can help with organization, coherence, syntax, semantics, and grammar
Wait, but organizing and expressing your thoughts IS writing. If you don’t make them do the work, why bother sending them to school at all?
AI has a great niche place in schools: searching the library. The rest of this seems dumb.
I agree. The output is entirely beside the point. There is no market for grade-school book reports.
I suppose this article about AI is as good as any to share my thoughts on the sheer inevitability of it being integrated into every aspect of our lives. This shouldn’t be taken as a value judgement - I’m not saying it’s a good thing. But the overwhelming utility, allure, and power of it is unstoppable. Artists worried about it making them irrelevant, concerns about distinguishing truth from fiction, impact on learning and development, etc. etc. etc.: all totally valid, but the discussion and planning both collectively and individually needs to start with the assumption that there is nothing anyone can do to stop it.
Taking the first example, if you’re an artist worried about AI replacing you, you need to start your thinking from the position that AI is absolutely going to make the “I can create an image” part of your value proposition worthless. Yes, a massive fraction of what you might have been able to get paid and recognized for in the past is now utterly irrelevant. Pleading with the public not to use AI, protesting, demanding legislation, praying - none of it will stop this reality from coming to be in your lifetime, probably within a few years at most.
I see a lot of comments and articles that don’t seem to understand this at all. They think there’s some way we can slow the adoption of AI in areas we think it’s harmful, or legislate a way into a desirable future, or whatever. They’re wrong. Whatever the future holds for us, it’s one where AI will be absolutely everywhere and massively disrupt society and industry as it exists today. Start your planning from that reality or you’re going to get blindsided.
This is kind of odd.
Bloom's paradox is well known and proven in education.
AI is the first thing that can positively personalize education and instruction and provide support to instructors.
The authors seem to lack the technical literacy to know that you can train on and focus only on textbooks, rather than the general models they explore and the pitfalls those have. Not knowing this key difference affects some of the points being made.
Having a take on technology requires some semblance of digital and technical literacy in the paper to acknowledge or navigate it; otherwise it becomes a potential blind spot.
The paper takes legitimate concerns and, ironically, explores them in average ways, much like an LLM returns average text for vague or incomplete questions.
Nonsense
There will however be a gigantic gulf between kids who use AI to learn vs those who use AI to aid learning
Objective review of Alpha school in Austin:
https://www.astralcodexten.com/p/your-review-alpha-school
> There will however be a gigantic gulf between kids who use AI to learn vs those who use AI to aid learning
yeah, but not the way you are thinking
you think the rich are going to abolish a traditional education for their kids and dump them in front of a prompt text box for 8 years
that'll just be for the poor and (formerly) middle-class kids
In the rosiest view, the rich give their children private tutors (and always have), and now the poor can give their children private tutors too, in the form of AIs. More realistically, what the poor get is something which looks superficially like a private tutor, yet instead of accelerating and deepening learning, it is one that allows the child to skip understanding entirely. Which, from a cynical point of view, suits the rich just fine...
This is absolutely not an objective review. The person who wrote this is a very particular type of person who Alpha School appeals strongly towards. I'm not saying anything in particular is wrong with the review, but calling it unbiased is incorrect.
What is the distinction between using "AI to learn" and using "AI to aid learning?"
Imagine a tutor that stays with you as long as you need for every concept of math, instead of the class moving on without you and that compounding over years.
Rather than 1 teacher for 30 students, 1 teacher can effectively scale to each of those 30, better addressing Bloom's 2 sigma problem: Bloom found that students tutored individually or in very small groups reliably ended up performing better than roughly 98% of students taught in a conventional classroom.
LLMs are capable of delivering this outright, or providing serious inroads to it for those capable and willing to do the work beyond going through the motions.
https://en.wikipedia.org/wiki/Bloom's_2_sigma_problem (1984)
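As a quick sanity check on the "98%" figure: a 2-sigma effect means the average tutored student sits about two standard deviations above the mean of the conventionally taught class, which under a normal assumption is roughly the 98th percentile. A tiny sketch, assuming SciPy is available:

    # Where the "98%" in Bloom's 2 sigma result comes from: Phi(2) under a
    # standard normal assumption.
    from scipy.stats import norm

    effect_size_sigma = 2.0                  # Bloom's reported ~2 sigma tutoring effect
    percentile = norm.cdf(effect_size_sigma)
    print(f"{percentile:.3f}")               # ~0.977, i.e. roughly the 98th percentile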
> Imagine a tutor that stays with you as long as you need for every concept of math, instead of the class moving on without you and that compounding over years.
I remember when I was at uni, the topics I learned best were the ones I put effort into studying by myself at home. Having a tutor with me all the time would actually make me do the bare minimum, as there were always other things to do, and I would have loved to skip the hard parts and move forward.
The tutor is available for you all the time to learn.
If you read the article the other post shared, I think you might be surprised to find it's exactly what you are describing.
I don't think this answers the question in the comment you're replying to.
Calling the Alpha school "AI" or even "AI to aid learning" is a massive stretch. I've read that article and nothing in there says AI to me. Data collection and on-demand computer-based instruction, sure.
I don't disagree with your premise, but I don't think that article backs it up at all.