The AI guys meet national security
A recent Silicon Valley manifesto has some disturbing features
Last week, a former OpenAI employee named Leopold Aschenbrenner published a 150+ page essay titled Situational Awareness, in which he claims that by 2030 humanity will develop “artificial general intelligence” and then “superintelligence” shortly afterwards. Artificial general intelligence, or AGI, is an industry term for an AI program that can do everything a human can do, and superintelligence would mean, according to Aschenbrenner, “AI systems vastly smarter than humans, capable of novel, creative, complicated behavior we couldn’t even begin to understand”.
Much of Aschenbrenner’s essay is a recapitulation of the old idea of the “technological singularity”, a hypothetical point at which technological progress becomes so rapid that society is rendered unrecognizable within a short period of time. As in so many other variations of this argument, the putative agent of this transformation would be AIs that were able to code even smarter AIs, which could then code even smarter AIs, and so on. In theory, this could lead to an “intelligence explosion”, and all of those superintelligent AIs would then set about revolutionizing every field of knowledge, producing the singularity. Humans would be left looking in from the outside and hoping they still had a place in whatever world the AIs created.
The overwhelming likelihood is that Aschenbrenner is not correct and that the singularity is not going to happen by 2030, or at all. Nor are these the most interesting aspects of his essay. Instead, what I found worth engaging with is his attempt to think through what advances in AI might mean for U.S. and international politics over the long term. Given that Aschenbrenner was until recently a part of the OpenAI team dedicated to thinking through these long-term problems, his writing provides an opportunity to examine how at least one part of the AI industry currently thinks about its place in the world. I found it disturbing reading.1
Mr. Aschenbrenner goes to Washington
Let’s break down Aschenbrenner’s argument into simple points. It goes something like this:
Superintelligence is possible with existing or foreseeable technological breakthroughs;
Much of the work of getting there is a matter of scale - throwing more computing resources at the problem, which will require trillions of dollars of capital investment;
This will likely require the government to coordinate or finance it (I wrote about this before);
Once governments realize that superintelligence is not just hype, they will seize control of the process anyway and begin a new arms race, because being the first nation to develop superintelligence means dominating the world;
The “free world” needs to win this arms race in order to have a decisive edge over China, which will be trying to develop its own superintelligence in order to dominate the world and plunge it into an authoritarian hellscape;
Superintelligence may act in such alien ways that it dooms us all along the way, but we can probably handle that.
There’s a lot going on here.
The part of all of this that I am least qualified to comment on is the actual technological limits of the current wave of AI research. I do know that researchers in other paradigms of AI research have promised AGI before and failed to deliver it. I also know that large language models (LLMs) have produced results which seem like magic. Where exactly the wall for this approach lies, I don’t know - and if he’s being honest, neither does Aschenbrenner.
But there are some parts of this that I feel qualified to comment on, because they touch on matters of U.S. politics and national security.
The problem really starts to set in during the part of the essay called “Racing for the trillion dollar cluster”, which predicts that “the most extraordinary techno-capital acceleration” will lead to trillions of dollars being invested in building the physical infrastructure of AI in the coming years. This is crucial for Aschenbrenner’s vision. Despite seeming disembodied, LLMs are actually reliant on a lot of physical infrastructure - data centers, electricity generation, and computing power for training. Training GPT-4 reportedly cost $100m, and running ChatGPT reportedly consumes as much power each month as tens of thousands of households. By some estimates, AI could consume just shy of 5% of global electricity by 2030.
And those are extremely conservative estimates compared to Aschenbrenner’s. He claims that as AI begins to make its creators trillions of dollars, they will plow this back into new physical infrastructure. They will create individual training clusters costing $1tn and drive an increase in U.S. electricity generation and consumption on the order of tens of percent. He was even kind enough to produce a stylized image of the “trillion dollar cluster”, generated by (what else?) AI.
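To get a feel for these magnitudes, here is a rough back-of-envelope sketch in Python. The household and U.S. generation figures are my own assumptions (about 1.2 kW of average draw per household and roughly 480 GW of average U.S. generation), not numbers from Aschenbrenner’s essay, and I read “tens of thousands” and “tens of percent” at their low ends:

```python
# Back-of-envelope arithmetic for the scale being discussed.
# The constants below are assumptions, not figures from the essay.

HOUSEHOLD_AVG_KW = 1.2    # assumed average U.S. household draw (~10,500 kWh/year)
US_GENERATION_GW = 480    # assumed average U.S. generation (~4,200 TWh/year)

# "Tens of thousands of households" for ChatGPT's monthly operation,
# taking 30,000 as a low-end reading:
chatgpt_load_mw = 30_000 * HOUSEHOLD_AVG_KW / 1_000
print(f"ChatGPT-scale inference load: ~{chatgpt_load_mw:.0f} MW")

# A "tens of percent" increase in U.S. generation, taking 20% as the low end:
extra_gw = US_GENERATION_GW * 0.20
print(f"A 20% increase in U.S. generation: ~{extra_gw:.0f} GW of new capacity")
```

Even on these deliberately low-end readings, Aschenbrenner’s build-out implies something like a hundred gigawatts of new generation - on the order of a hundred large power plants.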
Something on this scale would inevitably involve politics - it would require permits and environmental approvals, and it would have to navigate antitrust and political concerns. In hand-waving all of this away, Aschenbrenner seems to be imagining a political singularity which is just as fantastical as his technological one. His assumption seems to be that by 2027 or so, the potential of AI will be so obvious and great that we’ll transition to some new realm in which the current rules of America’s highly-polarized politics and cumbersome regulatory state do not apply. This doesn’t seem to me to be very likely, or desirable.
A good example is the matter of the environment. Aschenbrenner doesn’t even throw anyone a bone by saying that the massive increase in U.S. electricity production which his plan foresees could come from renewables (can’t the AI design a new solar panel or something?) but instead sees it coming from natural gas, which he points out the U.S. has in abundance. And while he acknowledges that there might be some objections to increasing U.S. energy production by tens of percent in the space of five years through increased fossil fuel extraction, he waves them away as “well-intentioned but rigid climate commitments”. The U.S. doesn’t have the luxury of sticking to these commitments, he says, because otherwise the data centers will be built in the Middle East, where the rulers of petrostates won’t have any problems with making abundant energy available.
What’s left unsaid, but I assume implied, is that cooking the planet won’t matter in the short term because once the singularity happens, the AI will just solve the problem anyway - if it’s not busy killing us all, that is. So in effect, American citizens are being asked to tear down all political, legal and regulatory obstacles to Silicon Valley getting what it wants, whatever the consequences, on the promise that once the AI guys have developed superintelligence, it’ll magically make everything right again.
The national security imperative
If this doesn’t seem like a great deal, then Aschenbrenner has an answer ready: it’s better than the alternative, which is China ruling the world. As someone who really doesn’t want China to rule the world, I am sympathetic to this argument. But I also find such a conventional idea (the Reds are coming!) a little strange coming from someone who is saying that we’re only years away from a radically unrecognizable world.
At the core of a lot of debates around advanced AI is the concept of “alignment” - basically, whether the AI can be made to act in ways which agree with our values. If we truly do create a superintelligence which was itself coded by an earlier generation of superintelligences, it seems likely that its modes of thought and action would be completely alien to us. For instance, if it doesn’t value human life, it might decide to solve global warming by killing every human, on the basis that we generate heat.
For the people who believe that they are in the business of creating such superintelligences, the idea of “alignment” serves as a useful psychological crutch - it tells them that they will be able to figure out how to control these intelligences and direct them towards desirable goals. Aschenbrenner is incredibly hand-wavy about the industry’s ability to solve this problem, but basic common sense tells us that it would not be possible for us to control a true superintelligence. Even if it were, the singularity is supposed to produce a world so radically different from the one we now live in that it would be difficult for us to imagine what we would even value in such a world.
For me, this chain of logic sits uneasily with the idea that what we really have to fear once we’re through the looking glass is… China. Why would superintelligences care about nationalities? Would they, once created, really act to advance the interests of one ideological system or country? Would the world they created even have such concepts anymore? We might try to code them to, of course, but it seems like such an approach would be doomed to failure for the same reason that any type of alignment seems out of reach. And are we actually going to try to code them to believe that China, or any other nation, is the enemy? What consequences might that have once the thing comes online and gets the nuclear codes?
Back in the real world
You don’t have to believe in singularities, be they technological or political, to see why any of this matters. It may be unlikely that the AI industry is on the path to superintelligence, but it sure is creating something - something which is likely to have large economic, political, and international consequences. The worldview of the people doing the creating matters.
Aschenbrenner’s essay leads us to expect that this process of creation is going to proceed with some basic assumptions in mind. The first is that AI is so important and potentially powerful that we ought to sacrifice other societal goals - addressing climate change, preserving democracy, everything else - in order to facilitate it. We must let immensely powerful and wealthy people do what they tell us they think is best for humanity, even as those very same people become even more fantastically wealthy and powerful (and also tell us that what they’re doing might be the end of humanity).
Secondly, if we question the need to do it, we can expect to be accused not just of being luddites, but also traitors - people who don’t realize that the unfettered development of AI is necessary in order to ensure the continued survival and success of democracy, freedom, and the American way. We must accept that the United States and China (and, presumably, any other non-democratic country) are destined to be implacable foes locked in a death struggle. The development of “superintelligence” by U.S. rivals will doom humanity, and presumably any means must be used to stop it (Aschenbrenner helpfully predicts that whichever country achieves superintelligence first will be immune to nuclear attack, which I guess rules in a first strike).
Thirdly, the people developing AI seem to have tunnel vision - a disregard for all other values, be they political, legal or cultural, which might stand in the way of their vision (ironic, given they’re the people who are also supposed to ensure “alignment”). They’re not that knowledgeable about or interested in the constraints of the political system, or of international relations, because these things are ultimately destined to be swept away. What they’re working on is too important to be constrained by them.
Such people sound extremely dangerous to me, superintelligence or no.
1. There is, perhaps, some snark and certainly lots of criticism in what follows. So I just want to make clear that Aschenbrenner seems to be a sincere and conscientious individual who is trying to help educate the world about what he sees as a dire threat. He also seems to have suffered professional consequences for doing this. So I take my hat off to him, even while I think it’s important to think critically about his vision.