An image of writers celebrating their good fortune in a robot-powered utopia. Courtesy of everyone’s favorite pundit, ChatGPT, aka DALL-E.
Beware of Lazy Historical Analogies
Tyler Cowen is hailed as one of our most astute futurists, but sometimes I wonder what the heck he's thinking.
Case in point: this recent Bloomberg column, in which Cowen argues something like the following:
AI is going to radically disrupt society. The biggest hit will be to warfare, because AI is the next step in a global arms race that started with the horse. The next biggest hit will be to knowledge workers, because AI is going to replace their labor, much as the printing press replaced the labor of scribes and orators in the pre-modern era.
I don't think people like to read long block quotes, but for those who are interested, here are some excerpts:
AI may severely limit, for instance, the status and earnings of the so-called “wordcel” class. It will displace many jobs that deal with words and symbols, or make them less lucrative, or just make those who hold them less influential. Knowing how to write well won’t be as valuable a skill five years from now, because AI can improve the quality of just about any text. … Even if AIs can’t write better books than human authors, readers may prefer to spend their time talking to AIs rather than reading …
It is not hard to imagine a world, less than 20 years from now, in which a skilled carpenter is seen to have better prospects — professional and otherwise — than an articulate lawyer …
One of the better analogies for AI is the printing press. … The printing press helped to birth the Scientific Revolution in England and later the Industrial Revolution. It made democracy and near-universal literacy possible. It gave everyone a chance to read the world’s great books …
AI already is playing a critical role in military operations, whether it be drone warfare or Ukraine’s use of AI to destroy Russian military equipment on the battlefield …
Such an arms race is scary. But the larger, sadder truth is that history is a series of such arms races, starting with the use of the horse. …
When dynamic technologies interact with static institutions, conflict is inevitable, and AI makes social disruption for the wordcel class and a higher-stakes arms race more likely.
Coming from a pundit widely regarded as a lucid forecaster of technological change, this has to be some of the airiest speculation I've ever seen. AI isn't like the printing press! AI isn't like a horse!
Cowen's précis of history is so pat that I don't feel bad about being a relatively uninformed armchair theorist myself. To present my own, non-expert opinion: the printing press made copies out of original works. Generative AI makes original works out of copies. These processes are … not the same? In war, horsepower helped humans move around the battlefield. AI power takes humans off the battlefield. These changes are … profoundly different?
The only real point Cowen has to make is something like, "New technologies transform society." And this is news?
There's such a paucity of imagination when it comes to current AI commentary. Just a total lack of interest in connecting the dots. Even the vaguest arguments become internally contradictory. Cowen argues that AI will supplant intellectuals because it's essentially like the printing press. But in the same column, he points out that the printing press radically increased the status and reach of intellectuals. So which is it—is history going to repeat, or is everything going to be different this time? Cowen seems to argue that the general pattern will repeat (things will change) but that the details will be different (intellectuals will lose power instead of gaining power). But the first point is a vapid truism while the second is unsupported conjecture. Where's the meat?
Then he has a weird aside about how carpenters are going to see their status increase, because their work, presumably, is relatively immune to AI disruption. Huh? Didn't carpenters *already* undergo a loss of status because of industrial-scale automation? Why would their status bounce back with more automation? Is the idea that intellectuals will sink so fast that the already-sunken carpenters will look prosperous by comparison? Or that the field of carpentry has already undergone the ruthless winnowing that the field of intellectualism is about to experience? But why would that be good for carpenters, exactly? It certainly doesn't foretell a huge rise in carpentry jobs. If anything, Cowen's argument suggests that fields like carpentry and plumbing—fields where human dexterity still commands a premium—will be swamped by unemployed intellectuals looking for other careers. Which will supercharge competition for the limited number of carpentry jobs available, which will be terrible for carpenters.
And suppose carpenters do experience some radical increase in status, driven by comparatively high demand for their skills. If we're going to disrupt the art of war by building high-precision drone soldiers, why can't we disrupt the art of carpentry by building high-precision drone carpenters? Elsewhere in his column, Cowen suggests that AIs will replace doctors, too. So AIs are going to replace doctors, writers, and soldiers, but not carpenters? What makes the field of carpentry so special?
There's just a total unwillingness to think through any of the actual issues on the table. Instead we get a kind of excitable futurism in which a pundit points at a current trend and starts hopping up and down shouting, "What if more!?! What if more!?!" Cowen sees that we have robots on the battlefield. Ah, but what if we have more? He's heard that students use AI to cheat on their homework. What if that happens even more? He knows people often check the internet for diagnoses. What if people did that more?
I guess he hasn't read any articles about AI disrupting carpentry yet. God help us when he does.
Seven Reasons To Be Bullish on Wordcels
Is AI going to be terrible for intellectuals? Here are seven reasons, off the top of my head, to think Cowen's prediction might be wrong.
1. It's never worked that way before. As long as we're drawing hasty historical analogies, it's probably worth pointing out that every previous technological advance has ultimately led to the development of larger societies with increased social complexity and a larger clerical and administrative class. Every time!
2. Authenticity bias. People show a strong and durable preference for direct and indirect contact with other people. Language is a popular medium of human interaction (citation needed). As productivity increases, people will have the luxury to commit more resources to that preference. I'm no stemcel, but I like that math.
3. Competition at the margin. I work in an email job. Boy, let me tell you, AI makes me so much more efficient. Every time I need to send an email, I take notes on what I want to say, craft a prompt, enter it into ChatGPT, check the draft, refine the prompt, and … no! I just bang out the email. In a world where everyone has access to chatbots, the ability to add value on top of the new baseline becomes an advantage.
4. AI automates everything else, too. Pity the poor scribes, Cowen tells us. As AI automates the process of writing, skill with words will become less valuable. Except that AI automates illustration, too. And animation, and voice-chat, and music, and, increasingly, video production. And coding, and math, and signals processing, and pattern recognition, and data analysis, and driving, and piloting, and porn, and gaming, and the use of tools and machines …
I don’t see why language use would be at such a disadvantage compared to other skillsets. Isn't the low-skill, highly embodied service industry—the backbone of the U.S. economy—already undergoing radical disruption?
5. Language skills have already survived extensive disruption. There's an idea in Cowen's circles that manual labor got disrupted three hundred years ago, and now, by some law of technological karma, it's the knowledge workers' turn. But, as he observes in his column, skills can be outmoded in multiple ways—they can be automated, they can be demoted, or they can be outcompeted. Elite wordsmithing—the aesthetic use of language in poems and books—is a skill that has not, until recently, been automated. But it has been thoroughly outcompeted. The publishing business is dwarfed by the music and film and game industries. Journalism has been pummeled by digital distribution. The humanities have gotten crushed by the rise of data-driven fields. Recreational reading is a niche interest. Americans have never been less interested in showing off their skillful use of semicolons and five-dollar words. And most of these changes were well underway generations before AI came on the scene. If intellectuals still command high status, in other words, it's not because of a failure to automate the process of essay composition. Cowen has a tossed-off remark about how people are going to interact with technology instead of reading books in the future. Where have you been, man? They already do that.
6. AI makes it easier to be a total slob. Cowen's argument echoes earlier predictions that computer technology would lower the status of intellectuals by making everyone better informed. Instead, some people became better informed, and some people opted to use computers to drive themselves nuts with brain-rotting nonsense. If the same thing happens with AIs and chatbots—if millions of people become weird shut-ins who spend all day talking to robots that validate their eccentricities—then economic spoils will go to those who manage to resist technology's corrupting effects.
7. Language has high value as a signaling mechanism. In many pursuits, thinking well is an asset. How can you tell if someone thinks well? A tried-and-true option is to have them share their thoughts using words. If AI adds noise to this signaling mechanism, people might very well choose to invest in purifying the signal—say, by giving proctored exams, by holding in-person interviews, by developing technical countermeasures to AI fakery, or by punishing acts of deception so harshly that AI fraud is massively disincentivized. This is already underway in academic and professional spheres. If AI disrupts the dating market, and women increasingly find themselves beset by AI-augmented pickup artists, expect to see even more action on this front.
Put it all together, and I think you can glimpse a different future, one in which the success of our complex, highly automated society becomes dependent on a class of people who know how to use—and not be used by—AI-powered systems. Such people would have to be good at high-level thinking, since most tasks would be handled by machines. They'd have to have the discipline not to get distracted by endless floods of AI-generated content. They'd have to be able to step in and solve problems when automated systems failed. And they'd probably have to be pretty well socialized, since most of these tasks would be done in teams.
Not a word of this forecast rests on the claim that AIs *can't* duplicate human language skills. But you're still looking at an intensification of the same knowledge economy we have now, with elites paying through-the-roof prices to teach their kids abstract thinking, self-control, cognitive empathy, intercultural fluency, and so on—the very aptitudes Cowen writes off.
What We’re Really Talking About Here
All in all, there's a strange archaism to the debates we're having about AI, as if people would rather rehash arguments from the nineteen-fifties than grapple with what's happening in front of our eyes. What if a robot suddenly declared its ardent love for a human user? That happened last year, and it was a joke. What if an AI passed the Turing test? For all intents and purposes, that's now happening every day. What if there were people who spent all day interacting with robots instead of with other human beings? Have you heard of the gaming industry? What if we developed a "general" AI that could tackle challenges in multiple domains? Done. What if a robot declared its intent to wipe out humanity? Yawn. Wait, but what if we had killer robots that were actually attacking human—? Yeah, yeah, yeah. Old news. Next?
Wait, but here's the kicker. What if we had robots that could write novels?
And this would do what? Help to meet our society's vast excess demand for novels? Drive down the average compensation for novel writing from basically nothing to actually nothing? Add efficiency to the publishing business? The core challenge of contemporary publishing is grappling with our current, enormous, embarrassing oversupply of novels. We have absolutely no need for any more novels. Talk about a solution in search of a problem.
This same fuzzy thinking, this overreliance on yesterday's futurism, pervades Cowen's essay. Hoary tropes and science-fictional fables pile up sentence by sentence, paragraph by paragraph, making it hard to figure out exactly what he's predicting. Is he arguing that AI-driven text generators will take the guff out of fancy-pants writers while leaving everyone else comparatively unscathed? As noted, the fancy-pants writers have already been cut down to size. Is he suggesting that AIs are going to take over intellectual work altogether—problem solving, innovation, the generation of ideas? That would be a bigger blow to the worlds of business and science than to journalism or creative writing, since STEM-powered fields like tech and medicine are where most kinds of innovation now happen. Or is he saying that AIs are going to take over the work of communicating overall—speaking, writing, chatting, conversing? That would amount to the disruption, if not the eradication, of social life itself. In such a future, the woes of out-of-work essayists would be the least of our worries.
Here's what I think is really going on. What Cowen and people like him foresee is a technological revolution that, carried through, would ultimately put millions of people out of work. That's the end result of the prophecy they're trumpeting. It would put huge numbers of academics out of work, including scientists and engineers. It would put administrative assistants out of work. It would put your kids' teacher out of work. It would put entertainers and graphic designers and video-game creators out of work. It would put coders and therapists and truckers and librarians and shopkeepers and greeters and personal assistants and nurses and servers and pharmacists and HR specialists and middle managers and porn stars out of work. It would put millions of businesspeople and financiers and economists—Cowen's audience—out of work. It would probably put you out of work.
That's what affordable, unregulated, widely deployed, advanced AI would do. That's the business model. That's the pitch. That's what the fat cats are salivating for. You don't pour billions of dollars into a startup because you think you have a crackerjack plan to replace the nation's English professors. You do it because you think you have the goods to replace almost anyone.
But Cowen can't come out and say all that. Too dystopian. Too much in line with what the neo-Luddites are yammering about. And so we get this weird attempt to situate his apocalyptic predictions on the safe terrain of a familiar culture war clash, leading to the implication that AIs will stick it to those annoying, useless Critical Theory majors while blue-collar heroes somehow catch a break.
Or perhaps Cowen's goal is simply to situate himself on the right side of the trend—to argue that AI will put everyone out of work except people savvy enough to know that AI will put everyone out of work. Get with the program, losers! But what's the use of publishing that idea unless exchanging and understanding ideas confers some actionable advantage? This is what I can never understand about arguments like Cowen's. When you actually try to think through the implications, you end up concluding either that:
1. Robots are going to put everyone out of work except people smart enough to understand robots—in which case brainwork will be more important in the future, not less so,
or that,
2. Robots will put everyone out of work except people who are good at interacting with other people—in which case the ability to communicate clearly will be an asset,
or that,
3. Robots are going to put everyone out of work, period—in which case it won't matter whether you're a writer or a carpenter or a soldier or what-have-you.
So I can see a case for writing becoming less important, sure—but in the first two scenarios, the aptitudes associated with writing become more important than ever, and in the third, writing fares no worse than anything else. To get to a point where that isn't true, you have to believe that AI will lead people to lose interest in formulating and sharing their opinions, period. And if you find yourself making that prediction, you might as well just admit that the future looks so crazy and unknowable that pretty much anything could happen. Which undermines the basis for making any forecasts whatsoever.
Or, to put it bluntly: if Tyler Cowen really believes what he's saying, why doesn't he just shut up?