Parmy Olson

AI-Generated Art Sounds Alarming, But It Doesn’t Have to Be

Just a few months ago, the concept of using artificial intelligence to generate unique artwork seemed cutting-edge and futuristic. Pretty soon it will be as mundane as running a Google search.

Microsoft Corp. announced this week that it was making the most of its $1 billion investment in OpenAI, an artificial intelligence research outfit, and bringing that firm’s standout AI service to Microsoft 365, the company’s flagship bundle of software services. Microsoft Designer is powered by OpenAI’s DALL-E 2 AI technology and will generate images from whatever description users type into a box, such as “cake with berries, bread and pastries for the fall.”

It’s a swift step forward for DALL-E 2, which was first announced just six months ago. While the Designer app is currently available only in beta, the rollout underscores how quickly art-generating AI has been moving, to the extent that artists have expressed concern. Some artists’ names come up especially frequently as text prompts in similar art generators, and that has some worried about what the technology will do to their careers. AI ethicists are also fretting about a flood of new fake imagery hitting the web and powering misinformation campaigns.

Yet Microsoft’s involvement in this field is good news. The company is echoing OpenAI’s limited rollout of DALL-E 2, as well as its strict rules about the types of images it will generate. For instance, DALL-E 2 bans images showing explicit sexual and violent content, and does so by simply removing such images from the database of pictures used to train its model. Microsoft has said it will use similar filters.

Microsoft also said it would block text prompts on “sensitive topics,” which it didn’t elaborate on, but which will again most likely mirror DALL-E 2’s policy of banning queries related to things like politics or illegal activity, or images of well-known figures like politicians or celebrities.

There has been some hand-wringing among tech ethicists that open-source versions of this kind of technology, such as a tool released in August by British startup Stability AI, will lead to a free-for-all of fake content that will infect social networks and disrupt coming elections (think fake images of Joe Biden or Donald Trump in controversial situations).

But a carefully curated version of the technology from Microsoft seems to diminish that prospect for two reasons. First, opportunistic photo fakers are more likely to find their efforts stymied by the filters embedded in the technology. Second, as more people use such tools, the general public will become more aware that photos on the internet could be generated by AI.

It’s extraordinary that this form of creative artificial intelligence is moving so quickly and that Microsoft’s Designer tool will soon sit alongside business-software stalwarts like Word, Outlook and Excel. This is, as some have already pointed out, like clip art on steroids, limited only by a user’s imagination.

It also underscores how hard it can be to predict the direction that artificial intelligence will take. A few years ago, tech pundits widely expected that we would have self-driving trucks and cars on the road that would slash accident rates and put human drivers out of work. Now it’s artists and illustrators who have greater reason for concern, though the nature of their work may simply change. As art generation comes to the fingertips of millions, they will need to be flexible.