OpenAI Reveals Sora, a Tool to Make Instant Videos from Written Prompts 

The OpenAI logo is seen on a mobile phone in front of a computer screen displaying output from ChatGPT, March 21, 2023, in Boston. (AP)

The maker of ChatGPT on Thursday unveiled its next leap into generative artificial intelligence with a tool that instantly makes short videos in response to written commands.

San Francisco-based OpenAI’s new text-to-video generator, called Sora, isn’t the first of its kind. Google, Meta and the startup Runway ML are among the other companies to have demonstrated similar technology.

But the high quality of videos displayed by OpenAI — some after CEO Sam Altman asked social media users to send in ideas for written prompts — astounded observers while also raising fears about the ethical and societal implications.

“An instructional cooking session for homemade gnocchi hosted by a grandmother social media influencer set in a rustic Tuscan country kitchen with cinematic lighting,” was a prompt suggested on X by a freelance photographer from New Hampshire. Altman responded a short time later with a realistic video that depicted what the prompt described.

The tool isn’t yet publicly available, and OpenAI has revealed limited information about how it was built. The company, which has been sued by some authors and The New York Times over its use of copyrighted written works to train ChatGPT, also hasn’t disclosed what imagery and video sources were used to train Sora. (OpenAI pays an undisclosed fee to The Associated Press to license its text news archive.)

OpenAI said in a blog post that it’s engaging with artists, policymakers and others before releasing the new tool to the public.

“We are working with red teamers — domain experts in areas like misinformation, hateful content, and bias — who will be adversarially testing the model,” the company said. “We’re also building tools to help detect misleading content such as a detection classifier that can tell when a video was generated by Sora.”



OpenAI Finds More Chinese Groups Using ChatGPT for Malicious Purposes

FILE PHOTO: OpenAI logo is seen in this illustration taken February 8, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, the ChatGPT maker said in a report released Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said, according to Reuters.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as the creation and debugging of malware or the generation of fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID.

Some content also criticized US President Donald Trump's sweeping tariffs, including X posts such as "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.

OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.