It’s a perilous moment for creative life in America. While supporting oneself as an artist has never been easy, the power of generative A.I. is pushing creative workers to confront an uncomfortable question: Is there a place for paid creative work within late capitalism? And what will happen to our cultural landscape if the answer turns out to be no?
As sociologists who study the relationship between technology and society, we’ve spent the last year posing questions to creative workers about A.I. We’ve talked to book authors, screenwriters, voice actors and visual artists. We’ve interviewed labor leaders, lawyers and technologists. Our takeaway from these conversations: What A.I. imperils is not human creativity itself but the ability to make a living from creative endeavor.
The threat is monumental, but the outcome is not inevitable. The actions that artists, audiences and regulators take in the next few years will shape the future of the arts for a long time to come.
In a short span of time, A.I.-generated content has become ubiquitous. Prose written in A.I.’s unmistakably tedious style is pervasive, and in recent months newer tools like Suno and Sora 2 have filled the internet with hit country songs and squishy mochi-ball cats.
The question that often surrounds the introduction of a generative A.I. model is whether it can produce art at a level that competes with humans. But the creative workers we spoke with were largely uninterested in this benchmark. If A.I. can produce work comparable to that of humans, they felt, that’s only because it stole from them.
Karla Ortiz, an illustrator, painter and concept artist, described the moment she witnessed A.I. churning out art in her style. “It felt like a gut punch,” she said. “They were using my reputation, the work that I trained for decades, my whole life to do, and they were just using it to provide their clients with imagery that tries to mimic me.”
Proponents of A.I. often claim that, as good as it may get, the technology will never match the talent and ingenuity of superlative human-made art. Amit Gupta, a co-founder of Sudowrite, an A.I. tool designed for writing, believes that A.I. “will help us get to the 80 percent mark, maybe the 90 percent mark” of human writing quality, “but we’re still going to be able to discern that last bit.” Anyone with an iPhone can take a very good photo, Mr. Gupta has pointed out, but “there are still photographs that hang in museums; they’re not the photographs that you and I took.”
Sam Altman, the chief executive of OpenAI, has similarly suggested that A.I. will eventually replace the “median human” in most fields, but not the top performers. There’s a problem with this line of reasoning, however: Sui generis artistic prodigies are few and far between. Artists, like most people trying to do something hard, tend to get better with lots of practice. Someone who is, to borrow Mr. Altman’s phrase, a “median” writer in their 20s might turn into a great one by their 40s by putting in ample time and work.
The creative grunt work that A.I. stands to replace most quickly is what helps emerging artists improve, not to mention pay their bills. In the early years of her career, Ms. Ortiz supported herself coloring comics and making art for video game companies. Coming from a lower-middle-class background in Puerto Rico, Ms. Ortiz said, she “would have not been able to live as an artist had I not had those jobs that a lot of folks today can’t find” because would-be employers use A.I. instead. If an A.I. colors comics, takes notes in the TV writers’ room and sifts through the slush pile at a publishing house, how will young creative workers master their medium — and scrape together a living while doing so?
This is not a novel phenomenon; the starving artist is a cliché for a reason. Creative and cultural labor markets have long been beset by an imbalance between supply and demand: There are more people who want to write, paint, direct, act and play music than there are paying jobs doing those things. As a result, most artists aren’t paid especially well for their most creatively fulfilling work. Historically, this has advantaged those with the connections to score, say, a coveted unpaid internship at an art gallery or a film studio — and the independent wealth to pay for food and rent while completing it.
A.I. did not create these inequalities. But it may well exacerbate them if the technology eliminates the kind of entry-level jobs that allow early-career artists to make connections and a living, however meager, in artistic fields. Indeed, there is a prevailing fear that A.I. will be used as a pretext to eliminate jobs even if its outputs are unimpressive. When generative A.I. is put into actual practice, “its functionality is so limited and so disappointing and so mediocre,” said Larry J. Cohen, a TV writer who serves on the A.I. task force for the Writers Guild of America East. But because A.I. is surrounded by what Mr. Cohen called “a complete reality distortion field,” its mediocrity may not actually matter. Studios may adopt A.I. anyway, for fear of missing the bandwagon.
There’s a scholarly term for this: institutional isomorphism. In a 1983 paper, the sociologists Paul DiMaggio and Walter Powell confronted an apparent puzzle: Why do organizations in a field so often resemble one another in structure, practices and products, even when it might be advantageous to differentiate themselves? DiMaggio and Powell argued that when organizations are operating in an environment of uncertainty, especially one in which “technologies are poorly understood,” they look to see what other organizations are doing and copy them. The result of this mimicry is that over time, certain modes of operation become taken for granted as the correct and legitimate ones within a field, even if they do little to advance an organization’s aims.
Given that generative A.I. certainly qualifies as a “poorly understood” technology, we shouldn’t be surprised to see this kind of isomorphic process unfolding within media industries. In contract negotiations on behalf of W.G.A.E. members, the guild’s executive director, Sam Wheeler, has seen media companies resist demands for A.I.-related worker protections with a stubbornness usually reserved for dollars-and-cents issues, such as employee health care costs. Companies dug in their heels about A.I. even when they seemed to have no concrete idea of how they would actually use it. Mr. Wheeler has been struck by how “the lack of a plan” has been coupled with “the certainty that one will present itself.” And when that plan eventually emerges, the last thing executives will want is to be hamstrung by union rules.