America Explained is a newsletter about American politics, foreign policy and history - and how they all tie together. Our subscribers make America Explained possible. Click below to upgrade to a paid subscription in order to help support our work and ensure that you never miss a post.
Like a lot of people, I’ve been transfixed by OpenAI and its flagship product ChatGPT ever since it launched roughly a year ago. This isn’t because I think that ChatGPT is on the verge of achieving sentience or even “artificial general intelligence” (AGI), a term which OpenAI defines as an entity that can perform at or above human level at economically significant tasks. Rather, it’s because ChatGPT made me realize that the computer programs we describe as “AI” don’t need to be sentient or AGI-level in order to turn society upside down. Whether it’s automating white-collar work or manipulating people through tailored propaganda (but I repeat myself), ChatGPT is already capable enough to produce social goods and harms. Even if the sum total of future AI research just refines and tailors these so-called “large language models” (of which ChatGPT is one), the implications will be large. But so would the implications of merely incremental advances that gave AI agents the ability to engage in autonomous planning and action.
To illustrate this, consider the example of a tiger. A tiger is threatening to humans not because of its general intelligence, which is vastly inferior to that of a human, but because it is highly specialized at performing certain actions, like chasing someone down and breaking their neck. Similarly, AI agents developed to perform some task like running internet scams, manipulating people into voting for one party over another, or catfishing and blackmailing someone on the internet would not have to possess AGI in order to do those things. They would just need to be specialized in understanding their victims via their digital footprint and manipulating them through language. They would be the tigers of the internet. They’d also likely be capable of taking many people’s jobs.
It’s notable that the people at the cutting edge of this research are awed by its potential - and worried. This was the lesson of a bizarre saga which unfolded over the past few weeks, in which OpenAI’s CEO Sam Altman was first forced out of his job by the company’s board and then reinstated shortly afterwards. Although what exactly happened is still somewhat unclear, it seems as if the board felt that Altman was rushing to develop new technologies and unleash them on society much more quickly than it was comfortable with. Altman ultimately managed to out-manoeuvre them by threatening to take his talents and technology to Microsoft, along with a huge portion of OpenAI’s staff, forcing the board to back down.
The whole point of OpenAI, which tech bros were extolling with awe on podcasts only six months ago, was that it was supposed to have a governance structure which would prevent this rapid rush to commercialization before the implications for society had been fully considered. OpenAI operates a “capped profit” structure in which the individuals working there will only ever receive a capped amount of monetary compensation, limiting their incentive to “move fast and break things”, as the Silicon Valley phrase goes. On top of that, it has - or had - a board made up of academics and non-profit types who were focused on the issue of AI safety rather than on maximizing quick returns and impact.
But as a result of the last few weeks’ events, that board is gone, and Altman has become untouchable - all as OpenAI is rumored to be getting ready to unleash a new, even more powerful system on the world, the mysteriously named Q*.
The hype machine
It’s important to always be skeptical of the hype which surrounds new AI developments, not least because the field of AI has been through many cycles of hype and subsequent deflation over the decades. But because OpenAI has already produced one truly disruptive technology, it’s worth taking seriously the idea that it is going to make further strides which might be harmful to society, even if AGI remains out of reach.