OpenAI to 'Pause' Voice Linked to Scarlett Johansson

OpenAI says its 'Sky' artificial intelligence voice was made in collaboration with a professional actress and is not meant to sound like film star Scarlett Johansson. Noam Galai / GETTY IMAGES NORTH AMERICA/AFP

Movie star Scarlett Johansson said Monday she was "shocked" by an OpenAI synthetic voice that sounds like her, which was released after she declined to work with the ChatGPT-maker on such a project.
The artificial intelligence powerhouse headed by Sam Altman said it was working on temporarily muting the Johansson-sounding voice it calls "Sky."
"I was shocked, angered, and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets couldn't tell the difference," Johansson said in a statement.
Johansson said Altman offered in September to hire her to work with OpenAI to create a synthetic voice, saying it might provide people comfort engaging with AI.
Altman has previously pointed to the Johansson-voiced character in the movie "Her" -- a cautionary tale about the future in which a man falls in love with an AI chatbot -- as inspiration for where he would like AI interactions to go.
Johansson said Altman insinuated the similarity in voices was intentional when at one point he fired off a single-word post on X: "Her."
OpenAI said in a blog post that the "Sky" voice at issue was based on the natural speaking voice of a different professional actress and not meant to sound like Johansson.
"We believe that AI voices should not deliberately mimic a celebrity's distinctive voice," OpenAI said in the post.
"Sky's voice is not an imitation of Scarlett Johansson."
OpenAI is working on a way to "pause" Sky as it addresses what appears to be confusion about who it sounds like, the company said on X.
"We've heard questions about how we chose the voices in ChatGPT, especially Sky," OpenAI said.
Johansson said she has asked OpenAI for a detailed accounting of how "Sky" was made.
Risk team disbanded
The company explained that it worked with professional voice actors on synthetic voices it named Breeze, Cove, Ember, Juniper and Sky.

But Sky became the focus of attention last week when OpenAI released a higher-performing and even more humanlike "GPT-4o" version of the artificial intelligence technology that underpins ChatGPT.
In a demo, the new version of Sky was at times even flirtatious and funny, capable of seamlessly jumping from one topic to the next, unlike most existing chatbots.
So far in the AI frenzy, most tech giants have been reluctant to overly humanize chatbots.
Microsoft Vice President Yusuf Mehdi told AFP his company, which has a partnership with OpenAI, sought to make sure that AI was not "a he or a she," but rather a "unique entity."
"It should not be human. It shouldn't breathe. You should be able to...understand (it) is AI," he said.
Just days ago OpenAI said it disbanded a team devoted to mitigating the long-term dangers of artificial intelligence.
OpenAI began dissolving the so-called "superalignment" group weeks ago, integrating members into other projects and research.
Company co-founder Ilya Sutskever and superalignment team co-leader Jan Leike announced their departures from the San Francisco-based firm last week.



OpenAI Finds More Chinese Groups Using ChatGPT for Malicious Purposes

FILE PHOTO: OpenAI logo is seen in this illustration taken February 8, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, which the ChatGPT maker described in a report released Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said, according to Reuters.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as creating and debugging malware, or generating fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID.

Some content also criticized US President Donald Trump's sweeping tariffs, with X posts such as: "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.

OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.