Five Things to Know about the EU's Landmark Digital Act

The European Union in September named six gatekeepers including Apple and Google that must adhere to the rules. Kenzo TRIBOUILLARD / AFP/File

The world's biggest digital companies will be forced to comply from Thursday with strict EU rules that Brussels hopes will make the online market fairer for all.
The European Union in September named six so-called gatekeepers and 22 of their platforms including Facebook, Instagram and LinkedIn that must adhere to the rules: Google's Alphabet, Amazon, Apple, TikTok parent ByteDance, Meta and Microsoft.
That list is expected to grow after online travel agent Booking and Elon Musk's X notified the European Commission last week that they met the criteria to be considered gatekeepers.
Here are five rules included in the law that will force the titans to change their ways.
Save the start-ups
Big tech companies make billions of dollars in profit every year and some of the windfall goes to scooping up start-ups and innovators.
This rankles authorities, who accuse the giants of using their war chests to snuff out potential rivals before they become a threat.
Under the new rules all buyouts, no matter how small, will have to be notified to the commission, the EU's executive arm based in Brussels.
The commission also acts as the EU's powerful competition regulator.
Messaging unity
After multiple scandals that hit Meta-owned Facebook, many users chose to swap the giant's Messenger or WhatsApp messaging services for alternatives, such as Signal or Telegram.
Yet the market power of Meta's services remains strong, making it difficult for WhatsApp dissenters to keep messaging links with family and friends.
To solve this, the DMA imposes interoperability between messaging apps, while requiring that communications remain end-to-end encrypted.
Fair shopping on Amazon
Amazon is a major shopping platform for thousands of companies to sell their wares online. But suspicions are rife that the online giant abuses its role as a marketplace to better position its own products as a retailer.
The DMA will ban this conflict of interest, as well as demand that the gatekeepers share key information with business customers.
Open the App Store
Around the world, Apple has defended the sanctity of its App Store, barring companies from using their own payment systems and preventing apps from being downloaded outside the App Store.
Despite its warnings that opening up iPhones would pose a security threat, the DMA will force Apple to give ground on both those fronts.
Apple has said it will comply, but app developers and some digital companies, including Swedish music streaming giant Spotify, have accused it of acting in bad faith with changes that create prohibitive new costs for rivals.
Failure to comply with the DMA could carry fines in the billions of dollars -- big enough even for the world's biggest company by market value to pay attention.
Repeat offenders could face fines of up to 20 percent of their global turnover.
Any gatekeeper platform that locks in customers to use pre-installed services, such as a web browser, mapping or weather information, will also face fines.
Ad transparency
Google's search engine and Meta's Facebook and Instagram are the world's biggest online advertisers, a status that critics say the companies abuse by accumulating valuable data about customers and keeping it to themselves.
The DMA will force the tech giants to reveal much more to advertisers and publishers on how their ads work and on an ad's actual effectiveness.
This will make companies less beholden to Google or Facebook for understanding their customers and potentially encourage firms to get their message out in new ways.



OpenAI, Anthropic Sign Deals with US Govt for AI Research and Testing

OpenAI logo is seen in this illustration taken May 20, 2024. (Reuters)

AI startups OpenAI and Anthropic have signed deals with the United States government for research, testing and evaluation of their artificial intelligence models, the US Artificial Intelligence Safety Institute said on Thursday.

The first-of-their-kind agreements come at a time when the companies are facing regulatory scrutiny over safe and ethical use of AI technologies.

California legislators are set to vote on a bill as soon as this week to broadly regulate how AI is developed and deployed in the state.

Under the deals, the US AI Safety Institute will have access to major new models from both OpenAI and Anthropic prior to and following their public release.

The agreements will also enable collaborative research to evaluate capabilities of the AI models and risks associated with them, Reuters reported.

"We believe the institute has a critical role to play in defining US leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on," said Jason Kwon, chief strategy officer at ChatGPT maker OpenAI.

Anthropic, which is backed by Amazon and Alphabet, did not immediately respond to a Reuters request for comment.

"These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI," said Elizabeth Kelly, director of the US AI Safety Institute.

The institute, part of the US Commerce Department's National Institute of Standards and Technology (NIST), will also collaborate with the UK AI Safety Institute and provide feedback to the companies on potential safety improvements.

The US AI Safety Institute was launched last year as part of an executive order by President Joe Biden's administration to evaluate known and emerging risks of artificial intelligence models.