I wrote about education a couple of months back and my thinking has since evolved toward the prescriptive. That piece was too much “here’s what’s happening” without enough “here’s what you should do about it,” and I want to avoid virtue signaling.
If I had to compress everything into three words: preserve your curiosity.
That sounds like a platitude. It’s not. Seeing why starts with a question people ask all the time but rarely dig into.
Where did our model of education come from?
Every economy builds the kind of school it needs.
The best version of education has always looked more like a PhD than a classroom: one person, one real question, a mentor if you’re lucky, and enough obsession to keep going until you hit the frontier of what is known. You can see it in Socrates, in the Oxford tutorial, in the gentleman scientist following a question for years because he couldn’t let it go. That mode of learning was never fictional. It was just reserved for people who could afford it. Everyone else got the standardized version.
The standardized version most of us grew up in - one teacher, thirty students, bells, rows, age-graded classes - comes from late 18th-century Prussia. Frederick the Great formalized compulsory primary education in 1763, and the system accelerated after Napoleon embarrassed Prussia in 1806. They needed obedient soldiers and literate bureaucrats. Mass schooling was built to produce reliable people at scale.
Industrial capitalism knew exactly what to do with that model. When Horace Mann brought it to Massachusetts in the 1840s, and when industrialists like Rockefeller and Carnegie championed its spread, the classroom became a way to produce compliant, punctual, literate workers at scale. Alvin Toffler called this the “covert curriculum” of mass education: punctuality, obedience, repetitive work. The form stuck because the economy kept rewarding the kind of person it produced.
This is where Marx helps. His language for it was base and superstructure: the economic base comes first, and the institutions built on top of it learn to serve it. Education is one of those institutions. Althusser, and later Bowles and Gintis, pushed the argument further, showing how schools reproduce the kind of hierarchy the economy needs. The point is simpler than the vocabulary: education does not float above society. It trains for the kind of mind the economy knows how to use.
Once you see that, history looks less like a sequence of reforms and more like a pattern. Feudal societies educated aristocrats. Industrial societies built mass schools. The knowledge economy turned education into credentialism, the degree as signal and the university as gatekeeper. Dan Shipper calls the next phase an “allocation economy.” If that’s right, then the real question is not whether education will change. It has to. The question is what kind of mind the next economy will demand.
Graduating to AGI
Sam Altman said at TreeHacks a few weeks ago that if you’re a sophomore now, you will graduate into a world with AGI. Whether or not you take that literally depends on your definition of AGI, but the directional claim is hard to argue with. A degree is a strange thing to commit to when the economy it was designed for may not survive the degree.
The structural problem is simple. AI is getting cheaper at doing work faster than humans are getting cheaper at checking it. A recent paper from Catalini, Hui, and Wu called “Some Simple Economics of AGI” gives the formal version: the cost to automate a task falls with compute, while the cost to verify that the task was done correctly remains biologically bottlenecked. Execution scales. Verification doesn’t.
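You can see the shape of that asymmetry in a toy model. This is just an illustration, not the paper’s formalism: I’m assuming, for the sake of the sketch, that automation cost halves each period while verification cost stays flat, and the numbers are made up.

```python
# Toy model of the execution/verification cost asymmetry.
# Assumptions (mine, not the paper's): automation cost halves each
# period as compute improves; human verification cost stays flat
# because it is biologically bottlenecked.

def automation_cost(t, initial=100.0):
    """Cost to execute a task at period t; falls with compute."""
    return initial * 0.5 ** t

def verification_cost(t, flat=20.0):
    """Cost for a human to check the output; does not improve."""
    return flat

for t in range(6):
    exec_c = automation_cost(t)
    ver_c = verification_cost(t)
    share = ver_c / (exec_c + ver_c)
    print(f"period {t}: execute={exec_c:6.1f}  verify={ver_c:.1f}  "
          f"verification share of total cost={share:.0%}")
```

Run it and the verification share of total cost climbs from a minority toward nearly all of it. The bottleneck migrates to the human in the loop, which is the whole point.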
Once you see that, the educational problem becomes obvious. The people we need most are the exact people the new economy stops training first. Human verification still matters because LLMs hallucinate at some irreducible rate. But entry-level work gets automated first, which means the apprenticeship pipeline that used to produce skilled human judgment starts collapsing right when that judgment becomes most valuable.
The Catalini paper gives names to the two halves of this problem. The Codifier’s Curse is what happens when expert verification becomes training data. Every time you specify what good looks like clearly enough to check it, you make it easier for the machine to learn. The Missing Junior Loop is what happens when the low-level work that trained people into expertise disappears. We still need human oversight. We are just removing the path that used to create it.
You can already see this in software. Anything that can be verified cleanly gets pulled into the training loop. Compiler feedback, tests, formal environments - wherever the reward is legible, search can scale against it. From my own work pushing the boundaries of search in RLVR, that’s the dynamic: search compresses toward successful reasoning, then supervised learning broadens what the model can do with those traces. The details matter less than the direction. Every verifiable task is a candidate for absorption.
That means the human edge shifts upward and outward. Away from the parts of work that can be checked cleanly, and toward judgment, taste, and problem selection - the part where you decide what matters, what to build, what to investigate, and what counts as a good answer in the first place. That’s not a skill you learn by passively moving through a classroom.
The paper sketches two futures from here. The Hollow Path is the default one: automation destroys the junior pipeline, senior expertise ages out, and measured output rises while real understanding thins underneath it. The Augmented Path is the harder one: we deliberately create the friction the economy no longer provides naturally, using AI to generate dense, adversarial, out-of-distribution scenarios that compress the path to real judgment instead of replacing it.
This isn’t just a policy problem for schools and firms. It’s personal, because it changes what learning should feel like while you’re still inside the system.
The Arena and the Gates
Judgment is built in the arena, not the classroom. The next educational question is who gets into the arena at all.
Many of the arenas where real judgment forms are gatekept. Pharma, biotech, law, medicine - fields where you only learn the hard part by touching reality, but where touching reality is controlled by credentials, institutions, and licenses. The degree matters because it is not just a signal. It is often the gate to the place where contextual understanding gets built.
That was the logic of the knowledge economy. School sorted people. Credentials rationed access. Institutions protected the gate because their prestige and revenue depended on controlling entry into the arenas where valuable judgment got made. This is what I mean by superstructure capture: institutions defend the base that made them powerful.
But if the economy now needs more judgment and less legible credentialed intelligence, the gate starts looking less natural. It starts looking historical.
That is why the cracks are showing in exactly the places you’d expect. The Thiel Fellowship pays people not to go to college, a direct bet against credentialism. Y Combinator values building over degrees. Alpha School uses AI to compress academic learning to two hours a day, freeing time for real projects. Companies are dropping degree requirements. AI gives individuals leverage that used to require institutional backing. The walls do not fall because people become philosophically enlightened. They fall when the economy no longer needs them the way it used to.
The arena is still unevenly accessible and some gates will remain for good reasons. But the direction is clear: if judgment matters more, access to judgment-building environments matters more too.
The Convergence
Education optimized for measurable intelligence right before measurable intelligence became cheap.
That is the convergence. In every era, education has tried to produce the kind of intelligence the economy rewarded. In industrial society that meant literacy, numeracy, obedience. In the knowledge economy it meant expertise, specialization, credentialed knowledge. But the kind of intelligence that can be measured, tested, and signaled is exactly the kind AI commodifies first.
Ilya Sutskever said it more bluntly than most educators ever will: “If you value intelligence above all other human qualities, you’re gonna have a bad time.” The person helping build intelligence as a commodity is telling you that intelligence, at least in its measurable form, is no longer the safe bet. The thing school spent decades optimizing for is becoming abundant.
That is why most adaptation advice feels shallow. “Upskill for AI.” “Learn prompt engineering.” It sounds new, but it keeps the old logic intact. It still treats education as a way to acquire the currently legible skill and cash it in later. It still assumes the point is to become good at whatever the market can already name.
Real education starts earlier than that. It starts when a question matters to you enough that you stop needing to be assigned it. The deepest form of learning has always looked like that - Socrates in the agora, the Oxford tutorial, the PhD student at the frontier of a problem, the gentleman scientist following a question because he cannot leave it alone. What used to make that mode rare was class. What makes it newly practical is that the economy now rewards the kind of depth it produces.
When measurable skill gets automated, the durable edge shifts to the person who goes deeper by choice. The genuinely curious hit more out-of-distribution cases because they stay with a problem longer. They build judgment, taste, and contextual understanding because they keep running into reality after the rubric runs out. The instrumentally motivated optimize for legible, in-distribution competence, which is exactly the part of intelligence automation attacks first.
“Work is play” isn’t a lifestyle slogan here. It’s an economic claim. Intrinsic motivation drives depth, and depth is where judgment forms. Naval Ravikant gets at this when he says all true learning is on the job. Taleb gets at the other side of it with the “Intellectual Yet Idiot”: the person who knows the abstractions but has never paid for being wrong.
The Missing Junior Loop creates a real problem though. If judgment comes from friction and apprenticeship, and apprenticeship is disappearing, where does the friction come from?
AI’s role changes here. You don’t use it to skip the arena. You use it to compress the arena. The machine becomes an adversarial tutor that generates edge cases, failures, and scenarios dense enough to break your mental models faster than ordinary experience would. Curiosity selects the frontier. AI compresses the path through it. You build the tacit intuition.
The arena and AI aren’t opposing forces. They are the same motion if you use them correctly. For the first time at scale, the thing education should have always been about and the thing the new economy actually rewards point in the same direction.
Be in a Constant PhD
The practical question is what to do while the old system is still here.
Credentials still matter in this lifetime. Bureaucratic inertia is real. The old superstructure does not collapse on command. If you’re in school, don’t drop out because you read an essay on the internet. That’s not what I’m arguing.
What I am arguing is that you should use the institution for leverage, not for direction. Use its labs, professors, networks, funding, peers, and credentials as tools in service of your questions. Do not let the institution decide which questions are worth your life.
Treat your undergrad like a PhD. Go deep on what genuinely interests you. Get to the frontier of a real problem. Use AI there, but use it correctly. The highest-leverage use of AI in education is not getting answers faster. It is making struggle denser. Let it generate the edge cases, failures, and adversarial scenarios that break your understanding and force you to rebuild it.
You don’t use AI to avoid difficulty. You use it to compress difficulty. Curiosity selects the frontier. AI compresses the path through it. Reality decides whether you were actually learning.
That is what people miss when they talk about judgment. Judgment is not a vibe. It is what forms when your ideas collide with consequences. You build it by doing real work, being wrong in contexts that matter, and staying with the problem long enough that the rubric stops helping. The person who keeps following a real question becomes harder to replace than the person who optimized for the credential attached to it.
And before you optimize too hard, ask whether your ambitions are even yours. Girard’s mimetic theory suggests most desire is borrowed. That’s another essay, but the warning belongs here. If your goals are inherited from the system around you, then even your ambition is downstream of someone else’s incentives.
The economic base will keep shifting. It always has. The only durable asset is the capacity to generate real questions, pursue them deeply, and let reality change you in the process.
For the first time, the thing education should have always been about is also the thing the economy rewards. Preserve your curiosity.