AGI. The most powerful word in tech has no definition. Which means whoever gets to write that definition wins.
There is no definition of what AGI (artificial general intelligence) actually means. There is no agreed definition of “general”. No agreed definition of “intelligent”. No agreed method for measuring either. No AI lab has published falsifiable AGI criteria it commits to being measured against.
This definitional vacuum is quite likely intentional: it serves whoever needs the term most at any given moment.
Every foundation-model lab depends on proximity to “AGI” for its next capital raise. “Near-AGI” justifies valuations; “very good narrow AI” does not. Those who develop AI have zero incentive to formalize what AGI actually means.
And when there is no definition, there is no threshold. Every year regulators and investors argue over non-existent thresholds is a year in which AI labs develop without governance.
Without a threshold, undisciplined capital inflows, regulatory forbearance, and talent magnetism all flow toward a word no one has defined.
I believe two seemingly inconsistent statements to be true:
- Today’s top-tier models are generally intelligent (in most cases they reason and act intelligently)
- But they do not possess general intelligence (they cannot distinguish what they know from what they make up)
The first statement is about the ability to process complex variables. The top models already possess raw intelligence: Gemini 3.0 Deep Think can process complex variables, reason across domains, and demonstrate parity with elite specialists on hard problems.
The second point is the load-bearing wall that qualifies the first. A scientist who fabricates data 13% of the time is not a bad scientist but a fraud. Currently we allow AI a tolerance we would never extend to humans.
As long as hallucinations persist, you cannot rely on even the most advanced reasoning capabilities. And without reliability, there is no autonomous deployment, no liability transfer, no enterprise-grade trust.
I do believe this friction is temporary. Next-generation model architectures will reduce hallucination rates below human error rates, even though this may require fundamentally different approaches. Models that help humans solve frontier physics problems today will, in 18 to 36 months, do so with persistent memory, tool use across systems, and without hallucinations.
But will such systems be declared AGI?
My best guess is “No”. We will raise the thresholds. Being able to answer hard problems that more than 99.99% of humans cannot will not be enough. We will redefine “general” to also include intuition, embodied judgment, and multi-decade research agendas. This way, the term will always remain two to three years away. “We achieved AGI” would end the fundraising narrative, and no one holding equity wants that sentence spoken out loud.
I think it is best to ignore the AGI debate entirely.
