This is the first of a new occasional series on the newsletter which examines what a movie - recent or otherwise - has to say about America today. Subscribe to make sure you don’t miss the next. Also, please feel free to forward this email to anyone who might be interested.
I admit to being late to the game with this, but I presume by now that most people have seen ‘Oppenheimer’, last year’s award-winning movie about the father of the American atomic bomb. The movie is loosely based on the book American Prometheus by Kai Bird and Martin Sherwin, and it does a pretty good job of portraying both Robert Oppenheimer and the Manhattan Project more generally.
When the movie came out, a lot of people were very impressed by the parallels between the development of the atomic bomb and the contemporary development of large language model AIs like ChatGPT. I think this was, in retrospect, a little weird, if only because nuclear weapons can produce world-ending explosions and ChatGPT currently struggles to summarize the contents of the book American Prometheus for me.
But if you believe what the AI folks are saying, then there are some parallels. In both cases you have a group of really smart people working on a technology which they believe could do grave societal harm or even spell the end of the human race. Just like the American scientists racing against the Nazis to build a bomb, the AI people seem to be driven by the belief that now that the genie is out of the bottle, it’s vital that the first people to develop a new and dangerous technology are sane and decent folks who want to use it responsibly. Naturally, they think that they are those sane and decent folks, and that nobody else can be trusted. Some of them even claim that if they could put the genie back in the bottle, they would.
It’s fairly obvious why the heroic, tragic portrayal of Robert Oppenheimer by Cillian Murphy in last year’s movie appealed to people who think this way. Oppenheimer was a driven technologist, but he also wrestled with the moral consequences of what he was doing. Crucially, though, he still did build the bomb - and Lewis Strauss (the slippery guy going through confirmation hearings for Secretary of Commerce in one of the movie’s sub-plots) is right to say that Oppenheimer never publicly regretted the use of the bomb on Hiroshima and Nagasaki. Ultimately, if the tech bros want absolution, then the Oppenheimer movie provides it in the form of a fig leaf of moral respectability for a guy who got to build the thing he wanted to build, whatever the consequences were for the rest of us.
Another consequence of understanding the development of LLMs through the lens of ‘Oppenheimer’ is that it focuses our attention on very long-range, very unlikely outcomes.
The technology which is powering ChatGPT and other LLMs is clearly going to produce something of social and economic consequence. But it seems at least possible that the harms it does to society will approximate those done by social media rather than those done by the atomic bomb. Social media companies have done enormous harm to people (particularly children) all over the world by producing addictive, psychologically damaging products which warp our sense of reality and our relationship to our fellow human beings. Some of the practical applications of LLMs in the pipeline - virtual friends, micro-targeted persuasion tools, etc. - could easily do the same thing. But we might miss that if all we’re thinking about - and regulating against - is the probably-not-impending robot apocalypse.
The decline of the public mega-project
Another way to think about the Manhattan Project is as a huge public works program.
At one point, the quest to build an atomic bomb was absorbing about 0.4% of America’s GDP. As depicted pretty well in the movie, it involved a broad mobilization of America’s scientific and technical talent towards a particular end. But even then it was still just a subset of the nation-wide mobilization underway to win World War II. At its peak during the war, defense spending as a percentage of GDP was about 40%. And then the period after the war saw further massive public investments in things like highway construction, benefits for returning veterans, and home construction.
Since then, advocates of public spending on a particular issue have often called for a “World War II-style” investment program. Progressive Democrats quite frequently mixed their metaphors during their 2018-19 push for a “Green New Deal” by in one breath comparing it to FDR’s domestic programs of the 1930s and in the next comparing it to World War II. But the appetite for this sort of spending hasn’t existed in a long time, even during the period of ultralow interest rates which followed the financial crisis of 2008-9. The infrastructure and climate bill which Biden signed into law in 2021 was the biggest in decades, but it didn’t match these earlier mega-projects.
On one hand, the reason why it was possible to mobilize huge investment for the Manhattan Project might seem obvious: the United States was at war, and the project was building a really big bomb. But even though it was known by 1939 that German scientists had achieved nuclear fission, it still took a long time for American scientists to get American policymakers interested. Even then, it wasn’t purely the military applications which interested them. They also thought nuclear fission would have many peaceful uses. At one point, FDR even moved to cut off atomic cooperation with the British because he wanted to ensure that American companies would have a monopoly on atomic secrets once the war was over. And he did this even though he knew that doing so would slow down the development of the bomb itself.1
And it did turn out to be true that atomic energy had many peaceful applications - “the silver lining of the mushroom cloud”, as some have called it. “Nucleonics” led to developments in energy generation, cancer treatment, and more. Although these peaceful spin-offs were certainly not the main point of the Manhattan Project, they allowed some of the involved scientists to keep looking at themselves in the mirror, and they showed that tackling major scientific and technological questions often leads to discoveries of broad use to society. The Apollo program of the 1960s did the same thing.
Perhaps nothing illustrates the decline of public investment better than the fact that NASA now relies on the private sector to get payloads into space. Of course, the U.S. government still spends a lot of money, but in recent decades the giveaways have tended to come in the form of massive tax cuts and the “forever wars”. The latter have presumably led to some fantastic improvements in intelligence-gathering techniques, but they have also (as one observer recently wrote) “somehow managed to leave us without a defense industrial base”. This has become painfully apparent as the U.S. struggles to provide Ukraine with the means to defend itself without running down its stockpiles to dangerous levels.

A Manhattan Project for AI?
All of this leads to the question of whether, and when, the U.S. government is going to get into the LLM business.
If LLMs really are the game-changer that their proponents say they are, then it’s somewhat strange that the governments of the world are not taking more of an interest in seizing control of - or at least trying to guide - the development of this technology. That might take a number of forms, from surveillance and behind-the-scenes communication with the big companies (probably already happening) to actually trying to build an in-house federal LLM. The logic would be something like that of the Manhattan Project: if someone is going to do this, we’d better make sure that we’re first.
No doubt there are some tin-foil hat-wearers who think this is already happening, but I’m skeptical. As they scale upwards, LLMs are requiring increasing amounts of computing power, energy, and scientific expertise to build. There’s a tight market for all of these, but the expertise especially is a fairly easily traceable commodity. If large numbers of the world’s best AI minds - or even the second or third tier - started disappearing to some remote desert location in New Mexico, their peers would notice and start talking about it.
But that isn’t to say that it won’t happen one day. If LLMs turn out to have major applications in either the military domain or for basic research, it’s hard to imagine them remaining in private hands indefinitely. Many speculative scenarios about an AI-moulded future end with something like “the AI companies amass infinite wealth and make everyone else live on UBI”. But “the government takes over the AI and turns it towards some particular concept of the public good” seems just as likely to me.
This isn’t necessarily a better scenario. Governments the world over have a patchy record at best of deciding what’s in the public interest, and the perceived logic of interstate competition can be used to justify all sorts of destructive and reckless acts. Just look at the career of the atomic bomb, which was dropped twice, further tested in ways which harmed thousands afterwards, and nearly used to blow up the world many times since. As much as we sometimes like to think that public actors are more trustworthy than private ones, it’s pretty clear from history that in cases involving extreme concentrations of power and basic questions of values, this is not always the case.
But if the technology keeps getting better at the pace that its proponents say it will, this question can’t be avoided forever. Even if the level of societal disruption and harm remains more at the social media level than the atomic bomb level, governments are going to be interested in channelling and controlling the ways it impacts society at home and abroad. ‘Oppenheimer’ is ultimately a pretty good starting point for imagining what that might look like, the compromises it will bring - and how it can all ultimately go terribly wrong.
1 The decision was later reversed after Winston Churchill kicked up a fuss, and FDR came to believe that the broader Anglo-American alliance might be threatened.