South Korea Summit to Target ‘Blueprint’ for Using AI in the Military 

Guests attend the opening of an international conference on the responsible use of artificial intelligence (AI) in the military domain, in Seoul, South Korea, 09 September 2024. (EPA/Yonhap)


South Korea convened an international summit on Monday seeking to establish a blueprint for the responsible use of artificial intelligence (AI) in the military, though any agreement reached is not expected to carry binding enforcement powers.

More than 90 countries including the United States and China have sent government representatives to the two-day summit in Seoul, which is the second such gathering.

The first summit was held in Amsterdam last year, where the United States, China and other nations endorsed a modest "call to action" without legal commitment.

"Recently, in the Russia-Ukraine war, an AI-applied Ukrainian drone functioned as David's slingshot," South Korean Defense Minister Kim Yong-hyun said in an opening address.

He was referring to Ukraine's efforts to gain a technological edge over Russia by rolling out AI-enabled drones, hoping they will help overcome signal jamming and enable unmanned aerial vehicles (UAVs) to operate in larger groups.

"As AI is applied to the military domain, the military's operational capabilities are dramatically improved. However, it is like a double-edged sword, as it can cause damage from abuse," Kim said.

South Korean Foreign Minister Cho Tae-yul said discussions would cover areas such as a legal review to ensure compliance with international law and mechanisms to prevent autonomous weapons from making life-and-death decisions without appropriate human oversight.

The Seoul summit hoped to agree on a blueprint for action, establishing a minimum level of guardrails for AI in the military and suggesting principles for responsible use that reflect those already laid out by NATO, the US and a number of other countries, according to a senior South Korean official.

It was unclear how many of the nations attending the summit would endorse the document on Tuesday, which aims to set more detailed boundaries on AI use in the military but will still likely lack legal commitments.

The summit is not the only international set of discussions on AI use in the military.

UN countries that belong to the 1983 Convention on Certain Conventional Weapons (CCW) are discussing potential restrictions on lethal autonomous weapons systems for compliance with international humanitarian law.

The US government last year also launched a declaration on responsible use of AI in the military, which covers broader military applications of AI beyond weapons. As of August, 55 countries had endorsed the declaration.

The Seoul summit, co-hosted by the Netherlands, Singapore, Kenya and the United Kingdom, aims to ensure ongoing multi-stakeholder discussions in a field where technological developments are primarily driven by the private sector, but governments are the main decision makers.

About 2,000 people globally have registered to take part in the summit, including representatives from international organizations, academia and the private sector, to attend discussions on topics such as civilian protection and AI use in the control of nuclear weapons.



Social Media Companies Slam Australia's Under-16 Ban

Social media companies slam Australia's under-16 ban - AFP

Social media giants on Friday hit out at a landmark Australian law banning them from signing up under-16s, describing it as a rushed job littered with "many unanswered questions".

The UN children's charity UNICEF Australia warned the law was no "silver bullet" against online harm and could push kids into "covert and unregulated" spaces online.

The legislation, approved by parliament on Thursday, orders social media firms to take "reasonable steps" to prevent young teens from having accounts, AFP reported. It is due to come into effect after a year.
Prime Minister Anthony Albanese said the age limit may not be implemented perfectly -- much like existing restrictions on alcohol -- but it was "the right thing to do".

The crackdown on sites like Facebook, Instagram and X would lead to "better outcomes and less harm for young Australians", he told reporters.

Platforms have a "social responsibility" to make children's safety a priority, Albanese said.

Social media firms that fail to comply with the law face fines of up to Aus$50 million (US$32.5 million) for "systemic breaches".

TikTok said it was "disappointed" in the law, accusing the government of ignoring mental health, online safety and youth experts who had opposed the ban.

"It's entirely likely the ban could see young people pushed to darker corners of the internet where no community guidelines, safety tools, or protections exist," a TikTok spokesperson said.

Tech companies said that despite the law's perceived shortcomings, they would engage with the government in shaping how it could be implemented in the next 12 months.

The legislation offers almost no details on how the rules will be enforced -- prompting concern among experts that it will be largely symbolic.

Members of the public appeared doubtful.

"I don't think it will actually change a lot because I don't see that there's really a strong way to police it," 41-year-old Emily Beall told AFP in Melbourne.

Arthur McCormack, 19, said some things he had seen on social media when he was younger were "sort of traumatic".

"I think it's good that the government is on this ban. But in terms of enforcement, I'm not sure how it will be carried out," he said.

Meta -- owner of Facebook and Instagram -- called for consultation on the rules to ensure a "technically feasible outcome that does not place an onerous burden on parents and teens".

- 'Serious concerns' -

But Meta said it was concerned "about the process, which rushed the legislation through while failing to properly consider the evidence, what industry already does to ensure age-appropriate experiences, and the voices of young people".

A Snapchat spokesperson said the company had raised "serious concerns" about the law and that "many unanswered questions" remained about how it would work.

But the company said it would engage closely with the government to develop an approach balancing "privacy, safety and practicality".

UNICEF Australia policy chief Katie Maskiell said young people need to be protected online but also included in the digital world.

"This ban risks pushing children into increasingly covert and unregulated online spaces as well as preventing them from accessing aspects of the online world essential to their wellbeing," she said.

Leo Puglisi, a 17-year-old online journalist based in Melbourne, was critical of the legislation.

He founded streaming channel 6 News, which provides hourly news bulletins on national and international issues, in 2019 at the age of 11.

- Global attention -

"We've been built up by having 13 to 15-year-olds see 6 News online and then join the team," Puglisi said in a statement.

"We have said that this ban seriously risks restricting creativity from our young people, no matter what passion or future career they want to explore," he added.

One of the biggest issues will be privacy -- what age-verification information is used, how it is collected and by whom.

Social media companies remain adamant that age verification should be the job of app stores, but the government believes tech platforms should be responsible.

Exemptions will likely be granted to some companies, such as WhatsApp and YouTube, which teenagers may need to use for recreation, school work or other reasons.

The legislation will be closely monitored by other countries, with many weighing whether to implement similar bans.

Lawmakers from Spain to Florida have proposed social media bans for young teens, although none of the measures have been implemented yet.

China has restricted access for minors since 2021, with under-14s not allowed to spend more than 40 minutes a day on Douyin, the Chinese version of TikTok.