Putin to Boost AI Work in Russia to Fight ‘Dangerous’ Western Monopoly

This pool photograph distributed by Russian state agency Sputnik shows Russia's President Vladimir Putin (C), accompanied by Russia's top lender Sberbank CEO German Gref (L), touring an exhibition on the sidelines of an AI (artificial intelligence) conference in Moscow on November 24, 2023. (AFP)

Russian President Vladimir Putin on Friday announced a plan to endorse a national strategy for the development of artificial intelligence, emphasizing that it's essential to prevent a Western monopoly.

Speaking at an AI conference in Moscow, Putin noted that “it’s imperative to use Russian solutions in the field of creating reliable and transparent artificial intelligence systems that are also safe for humans.”

“Monopolistic dominance of such foreign technology in Russia is unacceptable, dangerous and inadmissible,” Putin said.

He noted that “many modern systems, trained on Western data, are intended for the Western market” and “reflect that part of Western ethics, norms of behavior, public policy to which we object.”

During his more than two decades in power, Putin has overseen a multi-pronged crackdown on the opposition and civil society groups, and promoted “traditional values” to counter purported Western influence — policies that have become even more oppressive after he sent troops into Ukraine in February 2022.

Putin warned that algorithms developed by Western platforms could lead to a digital “cancellation” of Russia and its culture.

“An artificial intelligence created in line with Western standards and patterns could be xenophobic,” Putin said.

“Western search engines and generative models often work in a very selective, biased manner, do not take into account, and sometimes simply ignore and cancel Russian culture,” he said.

“Simply put, the machine is given some kind of creative task, and it solves it using only English-language data, which is convenient and beneficial to the system developers. And so an algorithm, for example, can indicate to a machine that Russia, our culture, science, music, literature simply do not exist.”

He pledged to pour additional resources into the development of supercomputers and other technologies to help intensify national AI research.

“We are talking about expanding fundamental and applied research in the field of generative artificial intelligence and large language models,” Putin said.

“In the era of technological revolution, it is the cultural and spiritual heritage that is the key factor in preserving national identity, and therefore the diversity of our world, and the stability of international relations,” Putin said. “Our traditional values, the richness and beauty of the Russian language and the languages of other peoples of Russia must form the basis of our developments,” helping create “reliable, transparent and secure AI systems.”

Putin emphasized that trying to ban AI development would be impossible, but noted the importance of ensuring necessary safeguards.

“I am convinced that the future does not lie in bans on the development of technology, it is simply impossible,” he said. “If we ban something, it will develop elsewhere, and we will only fall behind, that's all.”

Putin added that the global community will be able to work out the security guidelines for AI once it fully realizes the risks.

“When they feel the threat of its uncontrolled spread, uncontrolled activities in this sphere, a desire to reach agreement will come immediately,” he said.



OpenAI, Anthropic Sign Deals with US Govt for AI Research and Testing

OpenAI logo is seen in this illustration taken May 20, 2024. (Reuters)

AI startups OpenAI and Anthropic have signed deals with the United States government for research, testing and evaluation of their artificial intelligence models, the US Artificial Intelligence Safety Institute said on Thursday.

The first-of-their-kind agreements come at a time when the companies are facing regulatory scrutiny over safe and ethical use of AI technologies.

California legislators are set to vote on a bill as soon as this week to broadly regulate how AI is developed and deployed in the state.

Under the deals, the US AI Safety Institute will have access to major new models from both OpenAI and Anthropic prior to and following their public release.

The agreements will also enable collaborative research to evaluate capabilities of the AI models and risks associated with them, Reuters reported.

"We believe the institute has a critical role to play in defining US leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on," said Jason Kwon, chief strategy officer at ChatGPT maker OpenAI.

Anthropic, which is backed by Amazon and Alphabet, did not immediately respond to a Reuters request for comment.

"These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI," said Elizabeth Kelly, director of the US AI Safety Institute.

The institute, a part of the US Commerce Department's National Institute of Standards and Technology (NIST), will also collaborate with the UK AI Safety Institute and provide feedback to the companies on potential safety improvements.

The US AI Safety Institute was launched last year as part of an executive order by President Joe Biden's administration to evaluate known and emerging risks of artificial intelligence models.