Stop Worshipping AGI
By Charlie G. Peterson | greg report 2027
AGI has started to sound like one of those old gold rush words. Everybody says it with a straight face. Half the room means one thing, the other half means something else, and somewhere in the back a vendor is selling shovels. A new paper from Judah Goldfeder, Philippe Wyder, Yann LeCun, and Ravid Shwartz-Ziv does something useful with that mess. It puts the word on the table, takes a wrench to it, and asks whether the field has been chasing the wrong target all along. The paper was posted on arXiv on February 27, 2026, and its answer is blunt: stop centering AGI, stop treating humans as the model of “general” intelligence, and start building toward what the authors call Superhuman Adaptable Intelligence, or SAI. (arXiv)
That shift matters more than it sounds. “General intelligence” has become a kind of fog machine in AI discourse. It fills the room, makes everyone look dramatic, and leaves a wet film on the furniture. The authors argue that humans are not general in any meaningful sense. We are highly capable, yes, but across a narrow and survival-shaped range. We walk, plan, improvise, read faces, learn tools, and build institutions. Ask us to calculate like a symbolic engine, search like a database, or hold a million pages in active reach, and the myth falls apart fast. The paper’s core claim is that calling human intelligence “general” is a flattering mistake, one that has dragged AI language into a ditch. (arXiv)
This lands because it cuts against a habit the field never quite shook. For years, “general” has been treated like a summit flag. Build one system that does everything. One architecture to rule the lab, the browser tab, the warehouse floor, maybe the laundry room if you are feeling cinematic. Goldfeder and his coauthors say that framing is weak on its own terms. AGI has no stable definition across industry and academia, they argue, and many versions of it are either too vague to measure, too human-centered to be useful, or too broad to guide real engineering decisions. That is not a small complaint. That is a complaint about the field’s favorite bumper sticker. (arXiv)
Their replacement is more grounded and, frankly, more dangerous in the practical sense. SAI asks a different question: not whether a machine looks “general,” but whether it can adapt fast enough to become excellent at new, important tasks, including tasks humans cannot do at all. That sounds colder because it is. It shifts the conversation away from flattering metaphors about machine minds and toward performance under pressure.
Time to learn matters. Range of learnable tasks matters. Skill transfer matters. Results matter. The authors explicitly define SAI as intelligence that can learn to exceed humans at anything important that we can do, and fill skill gaps where humans are incapable. That is a much sharper blade than AGI, and it cuts closer to the bone of what businesses, governments, and militaries will actually pay for. (arXiv)
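To make those four axes concrete, here is a minimal sketch of what scoring a system along them could look like. Nothing below comes from the paper; every name, weighting, and threshold is my own illustrative assumption.

```python
# Hypothetical scoring sketch for the four axes named above: time to
# learn, range of learnable tasks, skill transfer, and final results.
# The weights and the 10-task "range" cap are assumptions, not the
# paper's metric.

from dataclasses import dataclass

@dataclass
class AdaptationRun:
    task: str                  # name of the novel task
    hours_to_threshold: float  # wall-clock time to reach target skill
    final_score: float         # performance after adaptation, 0..1
    transfer_gain: float       # measured boost on related tasks, 0..1

def adaptability_score(runs: list[AdaptationRun],
                       max_hours: float = 100.0) -> float:
    """Collapse the four axes into one number: faster learning, wider
    range, stronger transfer, and better results all push it up."""
    if not runs:
        return 0.0
    per_task = []
    for r in runs:
        speed = max(0.0, 1.0 - r.hours_to_threshold / max_hours)
        per_task.append(0.4 * r.final_score
                        + 0.3 * speed
                        + 0.3 * r.transfer_gain)
    range_bonus = min(1.0, len(runs) / 10.0)  # reward breadth of tasks
    return (sum(per_task) / len(per_task)) * range_bonus

runs = [
    AdaptationRun("new-defect-detection", hours_to_threshold=6.0,
                  final_score=0.92, transfer_gain=0.4),
    AdaptationRun("protein-variant-ranking", hours_to_threshold=30.0,
                  final_score=0.88, transfer_gain=0.2),
]
print(f"adaptability: {adaptability_score(runs):.3f}")
```

The point of the sketch is not the numbers. It is that every input is something you can actually measure on a clock or a benchmark, which is exactly what “general” never gave you.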
There is another reason this paper matters. It quietly demotes the all-purpose chatbot fantasy. The authors point toward self-supervised learning and world models as promising paths because those approaches aim at adaptable internal representations of the world, not just next-token fluency dressed in a nice shirt. That matters if you care about systems that have to operate beyond canned demos, where the air smells like hot plastic and dust, the floor is uneven, the data is incomplete, and no one gets extra points for sounding smooth. The paper does not deny the value of broad models. It does something more interesting. It suggests that a monoculture of broad models may be a dead end if adaptability is the real prize. (arXiv)
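For readers who want a feel for what “world models, not next-token fluency” means in practice, here is a toy latent-prediction training loop in PyTorch, loosely in the spirit of JEPA-style self-supervised learning. The architecture, dimensions, and synthetic data are my assumptions, not the authors’ method, and real systems need an EMA target encoder or variance regularization to keep the representation from collapsing.

```python
# Toy latent-prediction world model: learn to predict the
# *representation* of the next observation rather than the observation
# itself. All sizes, names, and the synthetic "environment" are
# illustrative assumptions, not anything from the paper.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a raw observation to a compact latent state."""
    def __init__(self, obs_dim: int, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))

    def forward(self, x):
        return self.net(x)

class Predictor(nn.Module):
    """Predicts the next latent state from (latent state, action)."""
    def __init__(self, latent_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + action_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim))

    def forward(self, z, a):
        return self.net(torch.cat([z, a], dim=-1))

obs_dim, action_dim, latent_dim = 32, 4, 16
enc = Encoder(obs_dim, latent_dim)
pred = Predictor(latent_dim, action_dim)
opt = torch.optim.Adam(
    list(enc.parameters()) + list(pred.parameters()), lr=1e-3)

# Fixed linear dynamics standing in for an environment; a real system
# would train on logged (observation, action, next_observation) triples.
A = torch.randn(action_dim, obs_dim)

for step in range(200):
    obs = torch.randn(64, obs_dim)
    act = torch.randn(64, action_dim)
    next_obs = obs + 0.1 * (act @ A)

    # Stop-gradient target. Caveat: with nothing else, the encoder can
    # collapse to a constant; real methods add an EMA target encoder or
    # variance regularization. Omitted here to keep the sketch short.
    z_target = enc(next_obs).detach()
    loss = nn.functional.mse_loss(pred(enc(obs), act), z_target)

    opt.zero_grad()
    loss.backward()
    opt.step()
```

Predicting in latent space rather than pixel space is the design choice that matters here: the model is graded on understanding what changes, not on reproducing every surface detail.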
You can feel Yann LeCun’s fingerprints here, not just in the technical direction, but in the impatience with bloated mythology. That is part of why the paper is worth taking seriously even as a position paper rather than an experimental result. It is trying to clean the workbench before more money gets thrown at slogans. And the timing is right. Right now, the market is rewarding systems that do specific things very well: coding assistance, protein modeling, logistics optimization, customer support, medical triage, industrial vision. Nobody on a factory line is asking whether a model is “general.” They are asking whether it can adapt to a new defect pattern before second shift starts. Different question. Much better question.
I think this paper will irritate people for exactly the right reasons. It punctures a prestige term. It tells the field that “general” may be a vanity metric with better branding than substance. It also forces a harder public conversation. If SAI becomes the more honest goal, then the stakes become less philosophical and more immediate. Which sectors get these systems first. Who controls the adaptation loop. Which institutions grow dependent on machines that learn faster than their own staff. Who gets priced out. Who gets replaced. Who gets lulled by a pleasant interface while the real leverage moves elsewhere. None of that is abstract. It is procurement, labor, power, and time.
The paper does not settle those questions. It does something I respect more. It strips away a misleading label and hands us a better one, rougher and less romantic, but far closer to the steel of what is being built. AGI was always a word people could hide inside. SAI leaves fewer places to hide. (arXiv)
Sources
Goldfeder, Wyder, LeCun, Shwartz-Ziv, “AI Must Embrace Specialization via Superhuman Adaptable Intelligence,” arXiv, February 27, 2026. (arXiv)
MIT CSAIL, “Machines that self-adapt to new tasks without re-training,” for background on self-supervised learning and world-model-inspired adaptation. (csail.mit.edu)