Pessimists have been warning for decades that robots are coming for our jobs. But when we picture this eventuality, we usually think about the displacement of low-skilled work. For instance, the prospect of self-driving cars raised concerns about sudden unemployment among cab and delivery truck drivers, not the executives and lawyers and accountants who work for their employers. Even the very term “robot” implies something primarily skilled at manipulating physical objects, not the artefacts of the mind that so-called “knowledge workers” are concerned with.
But a new product, currently available for free public use, suggests we may have had the situation exactly backwards. Despite years of hype, self-driving cars have still not arrived – manipulating physical reality consistently and safely is really hard. Knowledge workers, on the other hand, already have a new competitor to contend with, and they are not remotely ready for the implications.
The product in question is ChatGPT from OpenAI. ChatGPT is a large language model, a computer program trained on vast amounts of text and refined with human feedback to produce useful and human-like responses to queries. To be clear, ChatGPT is not sentient or even “intelligent” in the way that a human is. It works simply by predicting what the next word in a sentence should be based on what has already been said. But that simple tool has amazingly powerful results.
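To make the idea of next-word prediction concrete, here is a deliberately toy sketch: it counts which word follows which in a tiny sample sentence and then greedily extends a prompt one most-likely word at a time. This is not how ChatGPT is built (real models use neural networks trained on enormous corpora), just an illustration of the underlying principle.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction -- NOT how ChatGPT actually
# works internally. We count word pairs (bigrams) in a tiny corpus and
# always pick the most frequent follower of the last word.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# For each word, count how often each other word follows it.
followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def predict_next(word):
    """Return the most frequently observed word after `word`, or None."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

def generate(start, length=5):
    """Greedily extend a sentence one predicted word at a time."""
    words = [start]
    for _ in range(length):
        nxt = predict_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))
```

A large language model does the same thing in spirit, but its “counts” come from billions of parameters fit to a huge slice of human writing, which is why its continuations feel so fluent.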
As I found out over the course of a day conversing with ChatGPT, the impact of this technology on people who work with ideas and words will be profound. In fact, in the course of answering queries related to my work as a historian and columnist, it quite simply blew me away. ChatGPT not only suggested new avenues of research on topics I thought I knew well but also offered improvements to my writing style (which, like any proud writer when faced with a skilled editor, I somewhat grumpily but eventually gratefully received).
One of the ways to really test the limits of ChatGPT is to ask it to do things that are, well, ridiculous. And lying in bed with a fever from covid last week, I did ask it to do some truly ridiculous things. Here are just a couple of examples:
And another:
As well as making my wife question my sanity, I think these are good examples of just how powerful and creative ChatGPT is. I’m fairly certain that nobody has ever written a thinkpiece comparing the War of 1812 to a colonoscopy or Andrew Jackson to an octopus - that is original creative work. And its ability to produce writing like this seems endless. So you can add entertainment uses to professional ones. This technology might not be quite on the cusp of producing an Industrial Revolution-style disruption in human society, but a level of disruption at least akin to the introduction of the desktop computer or the mobile phone seems likely - and probably much more.
To be fair, ChatGPT also has obvious limitations. It has a tendency to go along with whatever the user claims, such as when I convinced it that a key event in World War I took place not on Wall Street but on Made-Up Street. It sometimes produces plausible-sounding but incorrect answers, which makes it risky to use when researching topics you know little about. It’s currently an assistant and a sounding board for the researcher and writer, not a replacement. But merely in that more limited role, it is already capable of taking on some of the cognitive load of thinking through complex problems.
Eventually, large language models will become capable of even more – OpenAI’s next-generation GPT-4, expected to be released soon, is already generating significant buzz. But even in its current form, ChatGPT poses questions to society for which there are no clear answers.
Take college education as an example. ChatGPT can output plausible-sounding essays on an enormous range of topics. In my testing, it was certainly sufficient to achieve a passing grade in the subjects I know well, at least at lower BA levels. The text it generated concerning nuanced academic debates exceeded the understanding shown by many students - even if the writing, as Dan Drezner has noted, is very bland. And while it contains mistakes and omissions, so does the work of most students who receive a passing grade. This software isn’t going to be acing exams any time soon, but it could absolutely be passing them. Colleges have not even begun to grapple with what all of this means for ensuring that students are actually the authors of the work that earns them their degrees.
There’s also a risk that technology like ChatGPT could lead students – and society at large – to downgrade the value they place on education altogether. If a computer program can write an essay in seconds, we might be tempted to think, then what value is there in learning to write essays? The output of large language models can be so awe-inspiring, particularly to someone who is a relative neophyte in a subject themselves, that it seems pointless trying to best it.
If adopted widely enough, this viewpoint could ultimately become dangerous to human progress. Programs like ChatGPT do not actually generate new knowledge, but rather leech off the knowledge already created by generations of human beings. The instinct to challenge the program and reach beyond it with new feats of human creativity needs to be encouraged; we must not let it become a safety blanket, comforting but ultimately stifling.
How exactly this technology is interpreted and assimilated by our societies will have a profound impact on their futures. As a handmaiden of human creativity and ingenuity, it promises to make us even more effective and efficient. But if we instead place it and the answers it gives us on a pedestal and begin to wonder why we should bother to learn anything ourselves, we risk a profound and negative rearrangement of our values. We need to start making decisions to avoid that outcome now, because the future is already here. Just ask ChatGPT yourself.