AI with Reasoning Power Will Be Less Predictable, Ilya Sutskever Says

AI scientist Ilya Sutskever speaks at the NeurIPS conference in Vancouver, British Columbia, Canada, December 13, 2024. (Reuters)

Former OpenAI chief scientist Ilya Sutskever, one of the biggest names in artificial intelligence, had a prediction to make on Friday: reasoning capabilities will make the technology far less predictable.

Accepting a "Test of Time" award for his 2014 paper with Google's Oriol Vinyals and Quoc Le, Sutskever said a major change was on AI's horizon.

An idea his team had explored a decade ago, that scaling up data to "pre-train" AI systems would send them to new heights, was starting to reach its limits, he said. More data and computing power had produced ChatGPT, which OpenAI launched in 2022 to the world's acclaim.

"But pre-training as we know it will unquestionably end," Sutskever declared before thousands of attendees at the NeurIPS conference in Vancouver. "While compute is growing," he said, "the data is not growing, because we have but one internet."

Sutskever offered some ways to push the frontier despite this conundrum. The technology itself could generate new data, he said, or AI models could evaluate multiple answers before settling on the best response for a user, improving accuracy. Other scientists have set their sights on real-world data.

But his talk culminated in a prediction of a future of superintelligent machines, one he said "obviously" awaits, a point with which some disagree. Sutskever co-founded Safe Superintelligence Inc this year, in the aftermath of his role in Sam Altman's short-lived ouster from OpenAI, a move he said within days that he regretted.

Long-in-the-works AI agents, he said, will come to fruition in that future age, with deeper understanding and self-awareness. AI, he said, will reason through problems the way humans can.

There's a catch.

"The more it reasons, the more unpredictable it becomes," he said.

Reasoning through millions of options could make any outcome non-obvious. By way of example, AlphaGo, a system built by Alphabet's DeepMind, surprised experts of the highly complex board game Go with its inscrutable 37th move on the path to defeating Lee Sedol in a 2016 match.

Sutskever said similarly, "the chess AIs, the really good ones, are unpredictable to the best human chess players."

AI as we know it, he said, will be "radically different."



OpenAI Finds More Chinese Groups Using ChatGPT for Malicious Purposes

FILE PHOTO: OpenAI logo is seen in this illustration taken February 8, 2025. (REUTERS/Dado Ruvic/Illustration/File Photo)

OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, which the ChatGPT maker described in a report released Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said, according to Reuters.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as the creation and debugging of malware or the generation of fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID.

Some content also criticized US President Donald Trump's sweeping tariffs, with X posts such as: "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.

OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.