Summit Host South Korea Says World Must Cooperate on AI Technology

 South Korea's Minister of Science and ICT, Lee Jong-ho, speaks during a press briefing following the ministers' session of AI Seoul Summit in Seoul, South Korea, Wednesday, May 22, 2024. (AP)

South Korea's science and information technology minister said on Wednesday the world must cooperate to ensure the successful development of AI, as a global summit on the rapidly evolving technology hosted by his country wrapped up.

The AI summit in Seoul, which is being co-hosted with Britain, discussed concerns such as job security, copyright and inequality on Wednesday, after 16 tech companies signed a voluntary agreement to develop AI safely a day earlier.

A separate pledge was signed on Wednesday by 14 companies including Alphabet's Google, Microsoft, OpenAI and six Korean companies to use methods such as watermarking to help identify AI-generated content, as well as ensure job creation and help for socially vulnerable groups.

"Cooperation is not an option, it is a necessity," Lee Jong-ho, South Korea's Minister of Science and ICT (information and communication technologies), said in an interview with Reuters.

"The Seoul summit has further shaped AI safety talks and added discussions about innovation and inclusivity," Lee said, adding he expects discussions at the next summit to include more collaboration on AI safety institutes.

The first global AI summit was held in Britain in November, and the next in-person gathering is due to take place in France, likely in 2025.

Ministers and officials from multiple countries discussed on Wednesday cooperation between state-backed AI safety institutes to help regulate the technology.

AI experts welcomed the steps made so far to start regulating the technology, though some said rules needed to be enforced.

"We need to move past voluntary... the people affected should be setting the rules via governments," said Francine Bennett, Director at the AI-focused Ada Lovelace Institute.

AI services should be proven to meet obligatory safety standards before hitting the market, so companies equate safety with profit and stave off any potential public backlash from unexpected harm, said Max Tegmark, President of Future of Life Institute, an organization vocal about AI systems' risks.

South Korean science minister Lee said that laws tended to lag behind the speed of advancement in technologies like AI.

"But for safe use by the public, there needs to be flexible laws and regulations in place."



OpenAI Appoints Former Top US Cyberwarrior Paul Nakasone to its Board of Directors

OpenAI showed off the latest update to its artificial intelligence model, which can mimic human cadences in its verbal responses and can even try to detect people’s moods. - The AP.

OpenAI has appointed a former top US cyberwarrior and intelligence official to its board of directors, saying he will help protect the ChatGPT maker from “increasingly sophisticated bad actors.”

Retired Army Gen. Paul Nakasone was the commander of US Cyber Command and the director of the National Security Agency before stepping down earlier this year.

He joins an OpenAI board of directors that's still picking up new members after upheaval at the San Francisco artificial intelligence company forced a reset of the board's leadership last year. The previous board had abruptly fired CEO Sam Altman and was then itself replaced as he returned to his CEO role days later, Reuters reported.

OpenAI reinstated Altman to its board of directors in March and said it had “full confidence” in his leadership after the conclusion of an outside investigation into the company’s turmoil. OpenAI's board is technically a nonprofit but also governs its rapidly growing business.

Nakasone is also joining OpenAI's new safety and security committee — a group that's supposed to advise the full board on “critical safety and security decisions” for its projects and operations. The safety group replaced an earlier safety team that was disbanded after several of its leaders quit.

Nakasone was already leading the Army branch of US Cyber Command when then-President Donald Trump in 2018 picked him to be director of the NSA, one of the nation's top intelligence posts, and head of US Cyber Command. He maintained the dual roles when President Joe Biden took office in 2021. He retired in February.