World Leaders Plan New Agreement on AI at Virtual Summit Co-hosted by South Korea, UK

Figurines with computers and smartphones are seen in front of the words "Artificial Intelligence AI" in this illustration taken, February 19, 2024. (Reuters)

World leaders are expected to adopt a new agreement on artificial intelligence when they gather virtually Tuesday to discuss AI's potential risks but also ways to promote its benefits and innovation.
The AI Seoul Summit is a follow-up to November's inaugural AI Safety Summit at Bletchley Park in the United Kingdom, where participating countries agreed to work together to contain the potentially "catastrophic" risks posed by galloping advances in AI.
The two-day meeting -- co-hosted by the South Korean and UK governments -- also comes as major tech companies like Meta, OpenAI and Google roll out the latest versions of their AI models, The Associated Press said.
On Tuesday evening, South Korean President Yoon Suk Yeol and British Prime Minister Rishi Sunak are to meet other world leaders, industry leaders and heads of international organizations for a virtual conference. The online summit will be followed by an in-person meeting of digital ministers, experts and others on Wednesday, according to organizers.
"It is just six months since world leaders met at Bletchley, but even in this short space of time, the landscape of AI has changed dramatically," Yoon and Sunak said in a joint article published in South Korea's JoongAng Ilbo newspaper and the UK's online inews site on Monday. "The pace of change will only continue to accelerate, so our work must accelerate too."
While the UK meeting centered on AI safety issues, the agenda for this week's gathering was expanded to also include "innovation and inclusivity," Wang Yun-jong, a deputy director of national security in South Korea, told reporters Monday.
Wang said participants will subsequently "discuss not only the risks posed by AI but also its positive aspects and how it can contribute to humanity in a balanced manner."
The AI agreement will include the outcomes of discussions on safety, innovation and inclusivity, according to Park Sang-wook, senior presidential adviser for science and technology for President Yoon.
The leaders of the Group of Seven wealthy democracies -- the US, Canada, France, Germany, Italy, Japan and Britain -- were invited to the virtual summit, along with leaders of Australia and Singapore and representatives from the UN, the EU, OpenAI, Google, Meta, Amazon and Samsung, according to South Korea's presidential office.
China doesn't plan to participate in the virtual summit though it will send a representative to Wednesday's in-person meeting, the South Korean presidential office said. China took part in the UK summit.
In their article, Yoon and Sunak said they plan to ask companies to do more to show how they assess and respond to risks within their organizations.
"We know that, as with any new technology, AI brings new risks, including deliberate misuse from those who mean to do us harm," they said. "However, with new models being released almost every week, we are still learning where these risks may emerge, and the best ways to manage them proportionately."
The Seoul meeting has been billed as a mini virtual summit, serving as an interim meeting until a full-fledged in-person edition that France has pledged to hold.
Governments around the world have been scrambling to formulate regulations for AI even as the technology makes rapid advances and is poised to transform many aspects of daily life, from education and the workplace to copyrights and privacy. There are concerns that advances in AI could take away jobs, trick people and spread disinformation.
Developers of the most powerful AI systems are also banding together to set their own shared approach to setting AI safety standards. Facebook parent company Meta Platforms and Amazon announced Monday they’re joining the Frontier Model Forum, a group founded last year by Anthropic, Google, Microsoft and OpenAI.
In March, the UN General Assembly approved its first resolution on the safe use of AI systems. Earlier in May, the US and China held their first high-level talks on artificial intelligence in Geneva to discuss how to address the risks of the fast-evolving technology and set shared standards to manage it.



OpenAI Outlines New For-Profit Structure in Bid to Stay Ahead in Costly AI Race

The OpenAI logo is seen in this illustration taken May 20, 2024. (Reuters)

OpenAI on Friday outlined plans to revamp its structure, saying it would create a public benefit corporation to make it easier to "raise more capital than we'd imagined," and remove the restrictions imposed on the startup by its current nonprofit parent.

The announcement, along with the detailed rationale for the high-profile restructuring, confirmed a Reuters report in September that sparked debate among corporate watchdogs and tech moguls including Elon Musk.

At issue were the implications such a move might have on whether OpenAI would allocate its assets to the nonprofit arm fairly, and how the company would strike a balance between making a profit and generating social and public good as it develops AI.

Under the proposed plan, the ChatGPT maker's existing for-profit arm would become a Delaware-based public benefit corporation (PBC) - a structure designed to consider the interests of society in addition to shareholder value.

OpenAI has been looking to make changes to attract further investment, as the expensive pursuit of artificial general intelligence, or AI that surpasses human intelligence, heats up. Its latest $6.6 billion funding round at a valuation of $157 billion was contingent on whether the ChatGPT maker could upend its corporate structure and remove a profit cap for investors within two years, Reuters reported in October.

The nonprofit, meanwhile, will have a "significant interest" in the PBC in the form of shares as determined by independent financial advisers, OpenAI said in a blog post, adding that it would be one of the "best resourced nonprofits in history."

OpenAI started in 2015 as a research-focused nonprofit but created a for-profit unit four years later to secure funding for the high costs of AI development.

Its unusual structure gave control of the for-profit unit to the nonprofit and was in focus last year when Sam Altman was fired as CEO only to return days later after employees rebelled.

'CRITICAL STEP'

"We once again need to raise more capital than we'd imagined. Investors want to back us but, at this scale of capital, need conventional equity and less structural bespokeness," the Microsoft-backed startup said on Friday.

"The hundreds of billions of dollars that major companies are now investing into AI development show what it will really take for OpenAI to continue pursuing the mission."

Its plans to create a PBC would align the startup with rivals such as Anthropic and the Musk-owned xAI, which use a similar structure and recently raised billions in funding. Anthropic garnered another $4 billion investment from existing investor Amazon.com last month, while xAI raised around $6 billion in equity financing earlier in December.

"The key to the announcement is that the for-profit side of OpenAI 'will run and control OpenAI's operations and business,'" DA Davidson & Co analyst Gil Luria said.

"This is the critical step the company needs to make in order to continue fundraising," Luria said, although he added that the move did "not necessitate OpenAI going public."

The startup could, however, face some hurdles in the plan.

Musk, an OpenAI co-founder who later left and is now one of the startup's most vocal critics, is trying to stop the plan and in August sued OpenAI and Altman.

Musk alleges that OpenAI violated contract provisions by putting profit ahead of the public good in the push to advance AI.

OpenAI earlier this month asked a federal judge to reject Musk's request and published a trove of messages with Musk to argue that he initially backed for-profit status for OpenAI before walking away from the company after failing to gain a majority equity stake and full control.

Meta Platforms is also urging California's attorney general to block OpenAI's conversion to a for-profit company, according to a copy of a letter seen by Reuters.

Becoming a benefit corporation does not guarantee in and of itself that a company will put its stated mission above profit, as that status legally requires only that the company's board "balance" its mission and profit-making concerns, said Ann Lipton, a corporate law professor at Tulane Law School.

"The only reason to choose benefit form over any other corporate form is the declaration to the public," she said. "It doesn't actually have any real enforcement power behind it."

In practice, it is the shareholders who own a controlling stake in the company who dictate how closely a public benefit company sticks to its mission, Lipton said.