UN General Assembly to Address AI's Potential Risks, Rewards

The UN General Assembly chamber is seen in February 2023. Yuki IWAMURA / AFP/File

The UN General Assembly will turn its attention to artificial intelligence on Thursday, weighing a resolution that lays out the potentially transformational technology's pros and cons while calling for the establishment of international standards.
The text, co-sponsored by dozens of countries, emphasizes the necessity of guidelines "to promote safe, secure and trustworthy artificial intelligence systems," while excluding military AI from its purview, AFP said.
On the whole, the resolution focuses more on the technology's positive potential, and calls for special care "to bridge the artificial intelligence and other digital divides between and within countries."
The draft resolution, which is the first on the issue, was brought forth by the United States and will be submitted for approval by the assembly on Thursday.
It also seeks "to promote, not hinder, digital transformation and equitable access" to AI in order to achieve the UN's Sustainable Development Goals, which aim to ensure a better future for humanity by 2030.
"As AI technologies rapidly develop, there is urgent need and unique opportunities for member states to meet this critical moment with collective action," US Ambassador to the UN Linda Thomas-Greenfield said, reading a joint statement by the dozens of co-sponsor countries.
According to Richard Gowan, an analyst at the International Crisis Group, "the emphasis on development is a deliberate effort by the US to win goodwill among poorer nations."
"It is easier to talk about how AI can help developing countries progress rather than tackle security and safety topics head-on as a first initiative," he said.
'Male-dominated algorithms'
The draft text does highlight the technology's threats when misused with the intent to cause harm, and also recognizes that without guarantees, AI risks eroding human rights, reinforcing prejudices and endangering personal data protection.
It therefore asks member states and stakeholders "to refrain from or cease the use of artificial intelligence systems that are impossible to operate in compliance with international human rights law or that pose undue risks to the enjoyment of human rights."
Warnings against the technology have become increasingly prevalent, especially concerning generative AI tools and the risks they pose for democracy and society, notably via fake images and speech shared in a bid to interfere in elections.
UN Secretary-General Antonio Guterres has made AI regulation a priority, calling for the creation of a UN entity modeled on other UN organizations such as the International Atomic Energy Agency (IAEA).
He has regularly highlighted the potential for disinformation and last week warned of bias in technologies designed mainly by men, which can result in algorithms that ignore the rights and needs of women.
"Male-dominated algorithms could literally program inequalities into activities from urban planning to credit ratings to medical imaging for years to come," he said.
Gowan of the International Crisis Group said he didn't "think the US wants Guterres leading this conversation, because it is so sensitive" and was therefore "stepping in to shape the debate."
A race is underway among UN member states, including the United States, China and South Korea, to be at the forefront of the issue.
In October, the White House unveiled rules intended to ensure that the United States leads the way in AI regulation, with President Joe Biden insisting on the need to govern the technology.



OpenAI, Anthropic Sign Deals with US Govt for AI Research and Testing

OpenAI logo is seen in this illustration taken May 20, 2024. (Reuters)

AI startups OpenAI and Anthropic have signed deals with the United States government for research, testing and evaluation of their artificial intelligence models, the US Artificial Intelligence Safety Institute said on Thursday.

The first-of-their-kind agreements come at a time when the companies are facing regulatory scrutiny over safe and ethical use of AI technologies.

California legislators are set to vote on a bill as soon as this week to broadly regulate how AI is developed and deployed in the state.

Under the deals, the US AI Safety Institute will have access to major new models from both OpenAI and Anthropic prior to and following their public release.

The agreements will also enable collaborative research to evaluate capabilities of the AI models and risks associated with them, Reuters reported.

"We believe the institute has a critical role to play in defining US leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on," said Jason Kwon, chief strategy officer at ChatGPT maker OpenAI.

Anthropic, which is backed by Amazon and Alphabet, did not immediately respond to a Reuters request for comment.

"These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI," said Elizabeth Kelly, director of the US AI Safety Institute.

The institute, a part of the US Commerce Department's National Institute of Standards and Technology (NIST), will also collaborate with the UK AI Safety Institute and provide feedback to the companies on potential safety improvements.

The US AI Safety Institute was launched last year as part of an executive order by President Joe Biden's administration to evaluate known and emerging risks of artificial intelligence models.