I'm bullish on generative AI. I think the apocalyptic predictions are overblown. I think the programs we have now are really cool. I think the potential is there for something better. I think people will end up finding a thousand applications for these things in a way that drives a cumulative rise in global productivity. I think we’ll gradually devise norms and laws that mitigate the worst downstream effects. I think the programs will ultimately become a spur to creativity. That's the way these kinds of things usually go.
But can we all admit that the programs, as they exist right now, are severely limited? That we're mostly talking about a proof of concept, not a real primetime technology? That predictions of massive social change, including my own, are still extrapolating from historical growth trends, not acclaiming tools that currently exist? That this whole nascent industry, along with its attendant commentary, is still one giant collective spasm of speculation, anticipation, and conjecture?
There are good reasons to keep an eye on the frenzy. To believe the anticipation will be rewarded. That the conjectures will be vindicated. That the speculation will pay off. But we’re quite definitely not there yet. I mean, look at this:
That's what DALL-E gave me when I asked for an image of a centaur. Sometimes it comes through, sometimes it gives you … well, whatever this is. The thing is ridiculously unreliable.
I know what the response will be. Prompt engineering. Just get better at prompt engineering. It all comes down to perfecting the art of creative prompt engineering. And also parameters. And data. Just keep adding in more data. Sky’s the limit. Oh, and the context window. Gotta expand that context window. Wait’ll we hit 10 million tokens in the public-facing models, then you’ll see some real magic. We'll keep iterating on existing techniques, and the things will get better and better until one day, wham, anything will be possible.
Fine. But we're not there now. It's worth pointing this out, I think, because the hype is running ahead of the reality. There are good reasons for the hype. But it's still just hype. Will the fundamental erraticism of these programs prove to be a fatal and unconquerable flaw? Probably not! But it's a limitation now.
Anyway, these blips are minor concerns. The bigger problem is the sheer difficulty of wresting the programs out of the zone of mediocrity in which they seem to be firmly stuck. Remember, the big pitch on behalf of this technology is that it's not only going to automate rote tasks, but take over the work of creativity itself. If you think that's happening right now, you'd better back away fast, buster, because you have no idea what creativity is. Again, that's not to say these services won't become creative. Just that they're resolutely anchored, right now, in a particular genre of derivative kitsch. As always, the same caveats apply. Prompt engineering. More data. Better training. More context. Maybe people will slowly get better at yanking these generators away from their decidedly bland baseline. But now we're talking about human creativity, not machine ingenuity. Thinking people might find creative new uses for otherwise limited tools—that's one thing. Thinking the tools will find creative new uses for themselves? Not there yet.
And that brings up another point. I can't be the only one who's finding it easier and easier to spot the telltale signs of AI-generated content. A sugary slickness to the presentation. A fussy overemphasis on trivial details. An air of glossy insubstantiality, like a world baked fresh by Christmas elves. Those smothering Hallmark hues. Cluttered but uninteresting compositions. Not to mention the well-known giveaways, the word scrambles, the superfluous digits, the faces that melt into nightmare visions on close examination. This will date me, but it's like entering a world of slightly uncanny Trapper Keeper covers.
And that's just the images. There was a time, early on, when writers took to smuggling chunks of AI text into their magazine articles. Get this, they’d announce—a computer wrote that last part. Were you fooled? And at first I was. Har har, you got me. Then I got wise to the trick. I started looking for those excerpts. And guess what? They got easier and easier to spot, in part because they all sounded the same. Here's a sample of the genre:
Ultimately, there are differing opinions on the future of AI. Some experts believe AI will be a boon to humankind, increasing humanity's prosperity and wellbeing. Other experts are concerned about the downsides, fearing mass unemployment and disinformation. Still others believe machines can never replicate the depth of feeling unique to human beings. Only time will tell whether AI proves a blessing, a curse, or something in between.
Did I write that? Did an AI? Who cares? It sucks. But that's what we've seen from these programs so far: surrealism or blandness, with little in between.
Bland, bland, bland, bland … you know you're living in an AI's world when your soul starts to melt into a diffuse slurry of platitudes, kitsch, hedges, generalizations, chihuahua eyes, and underbaked non-insights, all rendered meticulously in civil-service prose and color palettes that Barbie would find tacky. That deep-down blah feeling, that great cosmic meh. Do I have a perfect track record at spotting this stuff? Surely not. What's my success rate? No way to know. But my eye has slowly gotten attuned to work that looks like it was made by AI, and I'm getting increasingly sick of the stuff. And I know you are, too, because of how we all talk. "Something that feels like it was made by AI"—why do we throw around that phrase? Because it refers to something real. Because we all know the feeling.
And we all know, too, that this kind of thing happens, reliably, with every technological change. When CGI first came out, it looked amazing. Now it looks like shimmery crap. When the PS3 came out, it looked incredible. Now it's recognizably old-gen. The same fate befalls every SFX, from greenscreen to puppetry to film itself.
Will AI beat the trend? Will it advance fast enough to outpace this human tendency to say ho-hum to what used to make us say gee-whiz? I don't know, but I think we tend to underestimate the sheer amount of work the makers and users of technology have to put into continually wowing us. Look how much they spend on those blockbuster movies. And half the time, the things still look like dreck. Then, too, if you push audiences too far, you get the uncanny valley, VR-induced nausea, 3D trends that never quite stick, the tawdry clarity of 48 fps. Where's the optimum between awesome high-tech and "kind of a hassle, actually"? How do you know when you've found it? As with everything else, AI isn't there yet.
But it'll get there, right? It's bound to get there. Look what's been accomplished so far! Look how much money has been poured into the field! All we have to do is extrapolate from current trends and—
Yeah, see, that's the problem. Fortunes have been both made and lost on the backs of extrapolated trends. It worries me that current predictions of a coming AI explosion sound a lot like last century's sci-fi dreams of relentlessly improving transportation. By 1960 or so, within the span of a single century, the world had been transformed by trains, then steamships, then dirigibles, then cars, then planes, and finally rockets. It was reasonable for people to extrapolate from that trend. Look at how quickly transportation had advanced. What could possibly be next? Flying cars? Skyhooks? Mars vacations? Interstellar travel? Teleportation?
Well, it turned out nothing was next. The trend went sigmoid. Rockets were the last great invention to take off. In the generations since, we've tweaked transportation by iterating on old designs: building faster trains, putting TVs in planes, designing fuel-efficient cars, coming up with better rockets. Progress continues, but at a slower pace. All that hype, all that fervid anticipation, marked the very peak of the transportation boom. People back then knew, intuitively, that they were at an inflection point. They just misconstrued the direction of the curve.
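For the record, here's the extrapolation trap in miniature. This is a toy Python sketch with made-up growth numbers, not a model of anything real: in its early stretch, a logistic ("sigmoid") curve is nearly indistinguishable from a pure exponential, so the data alone can't tell you which world you're living in.

```python
import math

def exponential(t, rate=1.0):
    """Unbounded growth: the curve the optimists extrapolate."""
    return math.exp(rate * t)

def logistic(t, rate=1.0, ceiling=100.0):
    """Bounded growth: starts out exponential, then saturates at `ceiling`."""
    return ceiling / (1 + (ceiling - 1) * math.exp(-rate * t))

# At t=1 both curves sit around 2.7; by t=8 the exponential has blown
# past 1000 while the logistic has flattened out just under its ceiling.
for t in [0, 1, 2, 4, 6, 8]:
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

The point of the sketch: every observation taken before the inflection is consistent with both curves, which is exactly the position the 1960s transportation forecasters were in.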
And while I have no special reason for thinking so, I can't shake the feeling that we're seeing something similar with AI. This is the big moment. This is as exciting as things are going to get, at least for a few more generations. This is the high before the crash, the manic turn before the depressive funk. The glimmering spaceships of science fiction are about to run aground on reality. The miracle technologies we’ve invented will get digested by the global economy, incorporated into the routines of daily life, and end up seeming … well, not bad, exactly. But not miraculous, either. Just ordinary.
Another AI winter? No. Not exactly. The first AI winter was a time when research efforts stalled. Now we’re in an age when the research has borne fruit. Call it an AI summer: all these cool new applications, hanging there plump and shiny on the vine, waiting to be plucked and put to use. But you know what comes after summer.
Maybe that’s why I feel this strange sense of anticlimax, like the feeling you get when an underground artist you’ve been following suddenly goes mainstream. LLMs have leapt out of the nerd-zone and made the transition to the mainstream. AI isn’t just a research area now. It’s applied research. It’s big business. It’s everyone’s business. It’s no longer the specialty of a weird subculture. It’s culture, period. It’s the new normal.
And that’s usually a good sign that an era of fervid innovation is about to wind down. To get bogged down in bureaucracy, choked with regulations, cluttered with fads, crushed by economic pressures, incorporated into systems and institutions that slowly turn experimental settings into legacy SOPs. All the people making money off AI are going to want to keep making money—which means reliable returns, which means doing more of what’s been proven to work, which means, ultimately, managing risk and playing it safe. All those businesses and organizations investing in AI are going to want to build it into a thousand different branded products, which means avoiding embarrassing racist images and chatbot freakouts and viral images of famous people with eyeballs for nipples and extra noses. All the people using AI in daily life are going to want to avoid getting screwed over by it, which means government regulations and political pressure. And all the scammers and hackers and stalkers and weirdos and creeps and conspiracy theorists and trolls and grifters who use technology for their own ends are going to gunk up the AI ecology with massive quantities of waste. The exploratory phase is over. Every big development underway now is poised to strangle further experimentation. Because experimentation is risky, experimentation is messy, experimentation is a hassle, experimentation hurts. Experimentation is only for certain kinds of people. And everyone but everyone is using AI now.
The people predicting major further advances seem to be pretty sure that we can keep leaping forward without falling on our faces. In particular, they seem to think LLMs can be configured to deliver content that’s safe without being bland—that the models can be trained or configured, as needed, to meet arbitrary standards of inoffensiveness. Ideally, I suppose, private models would be calibrated to provide slightly risqué content for TV shows and comedy routines and whatnot—the way human writers do now—while public models would hew to settings that were maximally cautious. Even if that proves to be technically feasible, though, will it be socially acceptable? Will people tolerate this arrangement? Will they grant LLMs the same latitude they're willing to grant human artists? What happens if a sample of offensive content leaks from one of these private AIs? I have no idea why people are so confident this kind of finicky fine-tuning can be achieved.
The boldest prognosticators seem to think LLMs are all but destined to reproduce something akin to human consciousness, probably in the not-too-distant future. But—and I may be going out on a limb here—they seem to believe this because they hold certain philosophical positions they expect AI to prove, namely that human thought is material in origin and can therefore be simulated on a sufficiently advanced computer. Fair enough. And with a large enough lever I could move the Earth. So what? What does that tell us about the here and now?
I don’t know. I'm trying to throw the weight of argument behind a feeling I can't quite justify. The feeling is this: hype is always wrong. It isn't just a neutral signal. It's a countersignal. When everyone says a stock's going up, the stock's almost certainly going down. When everyone says an artist's a genius, the artist is surely overrated. When everyone says a trendline's going to keep on rising forever and forever, you can bet the trend has already peaked. The mob is reliably wrong. This doesn't apply, of course, to long-held assumptions and timeless truths, which have been subject to centuries of criticism and calibration. But public opinion is wildly volatile, subject to its own boom-and-bust cycles, prone to swing between absurd extremes. Over the long haul, it might gravitate to a reasonable estimation of the underlying probabilities. In the short term, though, it always overshoots the mark.
Or, who knows, maybe this time things'll be different. Given the performance of the technology right now, though, we have to take that prediction on faith.