Couldn't agree more. This piece totally nails why Bostrom's superintelligence 'singleton' idea feels a bit off, especially when you think about how nuclear deterrence played out. I mean, with the sheer complexity and the distributed nature of AI development today, it's hard to imagine a single project maintaining that kind of tech lead without some serious counter-strategies emerging super fast.
Strong piece.
The deeper reason a “decisive strategic advantage” is unlikely isn't just diffusion or deterrence; it's that intelligence doesn't dissolve coordination constraints.
Even superintelligence still has to operate through institutions, supply chains, command structures, and legitimacy. Power scales with organized coherence, not raw cognition.
Brodie survives Bostrom because coordination, not intelligence, is the true scarce variable.
Interesting article, thank you. As I mentioned in my restack, I think diffusion will indeed be slower for a number of reasons (mostly to do with high-scale software being constrained by the rules, norms, and material conditions of the real world).
One other consideration is that a decisive advantage won't accrue to nations, or even primarily to nations. Would your analysis be different if - rather than the nation - the focus of your article were the multinational corporation (or the shareholders of the subset of companies that benefit)?
I find it curious that this post does not seem to engage much with "what is the ASI doing in the story for DSA?" (I haven't read the relevant portion of Superintelligence in a while and I'm basing this comment more on my general understanding of the field). The story is not "ASI --> DSA". The story is "ASI --> crazy new tech, etc. --> DSA". That "etc." could include:
- technologies that specifically undermine nuclear deterrence, for instance by credibly threatening to destroy all of a country's nuclear forces, or by disrupting command and control.
- technologies that are sufficiently hard to defend against, such as advanced nanotechnology (the offense-defense argument should be about particular technologies that are unlocked by ASI, not the ASI itself).
Many of the points in this post seem directionally correct, but they don't seem to strike at the cruxy-to-me claim: "whoever gets [advanced nanotechnology] first, if they are power-seeking, gets a DSA; on the current trajectory, getting ASI first would unlock advanced nanotech first, so ASI --> nanotech --> DSA". I think there are probably some other technologies that could replace nanotech in this statement. I also think that the "power-seeking" point is very load-bearing and is a key reason why one might not expect a DSA to happen in the real world.
I agree Brodie makes a good case that diffusion dynamics will dominate. But *if* somehow he turns out to be wrong about diffusion and one superpower gains a workable window of a considerable across-the-board lead, there is one thing it might be able to do to seize lots of enduring value for itself while plausibly avoiding triggering catastrophic nuclear war: let other nations live free and in peace on Earth, but be first to seize the rest of the solar system for itself... How many leaders would start nuclear armageddon on Earth for the sake of Mars?