S.Korea Approves Rules on App Store Law Targeting Apple, Google

The Apple Inc. logo is seen in the lobby of New York City's flagship Apple store, US, January 18, 2011. REUTERS/Mike Segar

South Korea approved detailed rules for a law banning dominant app store operators such as Apple Inc (AAPL.O) and Alphabet's (GOOGL.O) Google from forcing software developers to use their payments systems, the country's telecommunications regulator said on Tuesday.

South Korea passed the law, an amendment to the Telecommunication Business Act, last year, Reuters reported.

It was the first such curb by a major economy on Apple and Google, which face global criticism for requiring the use of proprietary payment systems that charge commissions of up to 30%.

The rules, called the enforcement ordinance, will be put into effect on March 15. They specify that the law bars "the act of forcing a specific payment method to a provider of mobile content" by unfairly utilizing the app market operator's status, the regulator Korea Communications Commission (KCC) said in a statement.

"In order to prevent indirect regulatory avoidance, prohibited acts' types and standards have been established as tightly-knit as possible within the scope delegated by the law," said KCC Chairman Han Sang-hyuk.

Barred acts include app market operators unfairly delaying the review of mobile content, or refusing, delaying, restricting, deleting, or blocking the registration, renewal, or inspection of mobile content that uses third-party payment methods.

Fines for violations can reach 2% of average annual revenue from the related business practices, the rules said.



OpenAI Finds More Chinese Groups Using ChatGPT for Malicious Purposes

FILE PHOTO: OpenAI logo is seen in this illustration taken February 8, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, the ChatGPT maker said in a report released on Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said, according to Reuters.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as creating and debugging malware, or generating fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID.

The accounts also generated X posts criticizing US President Donald Trump's sweeping tariffs, such as: "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.

OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.