Don’t Wait for AGI Day
AGI isn’t an event with a clear before and after. And that’s a good thing.
A few years ago I led a workshop with 20 members of a large international organization that focuses on mitigating the impacts of armed conflicts. The participants worked in policy, law, and information security. Our goal was to map the organization’s activities and then look at external factors that might impact them over the next 10 years. We identified AGI as one of several “critical uncertainties”.
On other topics, like geopolitical fragmentation, we had a very robust discussion. But when it came to AGI, participants seemingly drew a blank. The passivity was remarkable for an otherwise agentic group. It seemed that many could only conceive of two options: either AGI solves all of our problems or we all end up dead.
If this is how we approach the AGI transition, I expect us to end up in the second bucket.
Unfortunately, this mother of all dichotomies remains a popular line of thinking in AI. Just this week, the leading AI philosopher Nick Bostrom published a new paper in which he framed AGI as a one-time go vs. no-go decision. After this one decision we’re either lucky and our life expectancy jumps from 40 to 1,400 years, or we’re unlucky and we’re all dead.
In contrast, I don’t expect AGI to be a single event that happens to us with a sharp before and after. This is neither “like Russian roulette” nor “like undergoing a risky surgery”. The rise of AGI is a more iterative, distributed, and hopefully corrigible process. Nor is the AGI transition something that just happens to us. Humanity has agency in shaping its outcome.
Given that I don’t view AGI as a single event, I tend to avoid the terms “pre-AGI” and “post-AGI” in my work. My “AGI economy” series might be called “post-AGI economics”, but it’s not. This blog’s mission statement of preparing society for “a world with billions of AGIs” might be called “post-AGI studies”, but it’s not.
And maybe we should just retire this terminology altogether. Those who believe in a “slow” take-off, like me, might find that “post-AGI” is full of definitional ambiguity and something of a misnomer for the rise of AGI. Those who believe in a fast take-off very soon after we reach their definition of AGI, like Nick Bostrom, might find “post-AGI” more meaningful. However, they too might be better off stating their take-off assumptions explicitly rather than baking them implicitly into their definition of “post-AGI”.
1. AGI is here to stay
The prefix “post” is Latin for “after”. In general, the prefix denotes the decline in importance, or the complete cessation, of the phenomenon it attaches to:
The post-colonial period denotes the era after colonialism has ended.
The post-Soviet period denotes the era after the collapse of the Soviet Union.
A post-agrarian society is one in which most humans are no longer employed in the agrarian sector.
A post-industrial society is one in which most humans are no longer employed in the industrial sector.
A post-labor economy can be defined as one in which the labor share of income falls below 50%.
Post-menopause means menstruation is over.
Post-partum means birth is over.
Post-mortem means someone or something has already died.
Yet what we are facing is the exact opposite: the integration of AGI into our economy and society, not its exit from them. From that viewpoint, talking about a “post-AGI society” makes as much sense to me as talking about a post-electricity society, a post-computer society, or a post-AI society.
2. We will never agree on a specific AGI day
So why do people use terms like “post-AGI society” when they really mean an AGI-driven society? This goes back to the idea that there is a specific arrival date of the first AGI. One day someone declares “this is the first AGI”, everyone agrees, and we can neatly separate our timeline into before the arrival of AGI (B.A.) and after the arrival of AGI (A.A.).
However, the way I understand AGI, there will never be an unambiguous, unqualified agreement on when the first AGI arrived. There are dozens of different definitions of AGI. By some definitions we already have AGI; by others, we’re still more than a decade away.
And if my AGI is not your AGI, then my post-AGI is not your post-AGI. Are we in a post-AGI society when Peter Norvig declares AGI? When Nature declares AGI? When Gary Marcus eventually declares AGI?
The companies building AGI have addressed this definitional ambiguity by switching to operational frameworks for different “levels of AGI”. Maybe we could match the “levels of AGI” energy and talk about “levels of post-AGI”. However, that starts to sound pretty convoluted.
3. It’s worth distinguishing between AGI and take-off speed
Let’s assume everyone could agree on the same AGI test. The first AI model to reach the performance threshold on this universally accepted AGI benchmark is released on December 7, 2026. Will the literal “day after AGI” be very different from the day before? If it’s just another update that pushes the best model slightly above the previous best score, I don’t think this creates a very different world. Not to mention that the impacts of a technology don’t just depend on innovation but also on lagging factors, such as societal diffusion and adaptation.
In that sense, a sharp division between pre- and post-AGI may overstate the impact of “AGI day”. For most AGI definitions, the short-term lived experience of reaching that threshold will be Sam Altman’s “AGI kind of went wooshing by”. In contrast, it may understate the dynamism of the world “post-AGI”. There is no magic new static equilibrium after the first AGI. There will be more and smarter AGIs every single year for a long, long while.
My sense is that some, like Nick Bostrom, use the term “AGI” almost interchangeably with “fast take-off” or “intelligence explosion”. As a reminder, fast here doesn’t correspond to human intuitions of what’s fast: a fast take-off as defined by Bostrom takes on the order of “minutes, hours, or days”.1 You release the AGI model, it takes over the world, and after that humanity has no agency anymore. We can either concern ourselves with finding meaning in “deep utopia” or we’re all dead.
Given these different assumptions about what happens when we reach AGI, it would be less confusing to me if Bostrom used terms like “pre-fast-take-off” and “post-fast-take-off” rather than “pre-AGI” and “post-AGI”. As Ajeya Cotra argues, maybe we should spend less energy arguing over AGI timelines and more on our disagreements about take-off speeds.
Don’t pray for AGI day, shape the AGI transition
In some ways, objecting to “post-AGI” is nitpicky. We do have more urgent challenges than terminology. In another way, however, it touches upon an important underlying disagreement. Someone who thinks a fast take-off is highly likely might scoff at trying to figure out “mundane” aspects of the AGI transition like tax systems or pensions: “The survival of our species is at stake!”
I agree on the stakes in the long run. However, I’d also argue that a fast take-off is unlikely and, to the degree that it is possible, it poses unacceptable risks. The world is much safer with iterative launches and with institutional checks and balances that maintain corrigibility over time. To quote Stephen Casper’s Reframing AI Safety as a Neverending Institutional Challenge: “Unless we believe in an AI messiah, we can expect the fight for AI safety to be a neverending, unsexy struggle. This underscores a need to build resilient institutions that place effective checks and balances on AI and which can bounce back from disruptions.”
To illustrate what I mean with a concrete example: yes, we can test the personal vibes of a universal basic income at a sample size of <1,000 at Point A today. And, yes, we can imagine that at some Point B in the “post-AGI future” the global economy will be 50 to 75 times bigger, big enough for every current human to receive a universal basic income at a third of Switzerland’s current GDP per capita, funded by philanthropic donations from trillionaires.
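To make that Point B claim concrete, here is the back-of-envelope arithmetic as I run it, as a rough sketch rather than a forecast. The input figures (a world population of roughly 8 billion, Swiss GDP per capita of roughly $100,000, and a current gross world product of roughly $105 trillion) are my own approximations:

```python
# Rough sketch of the Point B arithmetic above.
# All inputs are approximations, not forecasts.

WORLD_POPULATION = 8e9          # ~8 billion people alive today
SWISS_GDP_PER_CAPITA = 100_000  # ~$100k per year (approximate)
CURRENT_GWP = 105e12            # ~$105 trillion gross world product (approximate)

ubi_per_person = SWISS_GDP_PER_CAPITA / 3           # a third of Swiss GDP per capita
total_ubi_cost = WORLD_POPULATION * ubi_per_person  # ~$267 trillion per year

for growth_multiple in (50, 75):
    future_gwp = CURRENT_GWP * growth_multiple
    share_needed = total_ubi_cost / future_gwp
    print(f"{growth_multiple}x economy: UBI costs {share_needed:.1%} of world output")
```

At a 50x economy the bill comes to roughly 5% of world output; at 75x, roughly 3.4%. That is small enough that, in principle, philanthropy alone could cover it.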
However, as in other domains, I don’t expect the transition of the economic system from A to B to be a blip.
This transition is something that we should try to shape.
It’s kind of the entire ballgame.
Thanks to Emma McAleavy, Mike Riggs & Alexander Kustov for valuable feedback on a draft of this essay. All opinions and mistakes are mine.
1. Nick Bostrom (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press, p. 77.


