Britain to Give New Tech Regulator Statutory Powers

Skyscrapers in The City of London financial district are seen from City Hall in London, Britain, May 8, 2021. REUTERS/Henry Nicholls

Britain will give statutory powers to a new technology regulator so it can enforce pro-competition rules and prevent tech giants including Google and Facebook from using their dominance to push out smaller firms and disadvantage consumers.

“The government will introduce legislation to put the Digital Markets Unit on a statutory footing in due course,” the Department of Culture, Media and Sport (DCMS) said in a statement on Thursday.

A spokesperson for the DCMS declined to comment when asked whether the legislation would be included in the government's program for the coming year, due to be outlined in the Queen's Speech on May 10.

The Digital Markets Unit (DMU) was launched in non-statutory form within the Competition and Markets Authority (CMA) last year to make sure tech companies don't abuse their market power.

The change would give the unit stronger enforcement powers, Reuters reported.

The DCMS said its proposals would make it easier for people to switch between Apple iOS and Android phones or between social media accounts without losing their data.

Smartphone users could get more choice of search engines and social media platforms and more control over how their data is used by companies.

The DCMS said small and medium-sized businesses would get better pricing from the big tech firms they rely on to trade online. Those firms would also need to warn smaller companies about changes to the algorithms that drive their traffic and revenues.

The proposed measures would also make sure news publishers are able to monetize their online news content and be paid fairly for it. The DMU would have the power to step in to solve pricing disputes between news outlets and platforms. App developers would be able to sell their apps on fairer and more transparent terms.

“We want to level the playing field and we are arming this new tech regulator with a range of powers to generate lower prices, better choice and more control for consumers while backing content creators, innovators and publishers, including in our vital news industry,” said digital minister Chris Philp.

The DCMS said the DMU will be able to levy fines of up to 10% of annual global turnover.



OpenAI Finds More Chinese Groups Using ChatGPT for Malicious Purposes

FILE PHOTO: OpenAI logo is seen in this illustration taken February 8, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, which the ChatGPT maker described in a report released Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said, according to Reuters.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as the use of its models to create and debug malware or to generate fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID.

Some content also criticized US President Donald Trump's sweeping tariffs, generating X posts such as: "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.

OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.