Meta Gets 11 EU Complaints Over Use of Personal Data to Train AI Models

Artificial Intelligence words are seen in this illustration taken March 31, 2023. REUTERS/Dado Ruvic/Illustration

Meta Platforms was hit with 11 complaints on Thursday over proposed changes that would see it use personal data to train its artificial intelligence models without asking for consent, which may breach European Union privacy rules.
Advocacy group NOYB (none of your business) urged national privacy watchdogs to act immediately to halt such use, saying recent changes in Meta's privacy policy, which come into force on June 26, would allow it to use years of personal posts, private images or online tracking data for its AI technology, Reuters said.
NOYB has already filed several complaints against Meta and other Big Tech companies over alleged breaches of the EU's General Data Protection Regulation (GDPR), which provides for fines of up to 4% of a company's total global turnover for violations.
Meta has cited a "legitimate interest" as its legal basis for using users' data to train and develop its generative AI models and other AI tools, which can be shared with third parties.
NOYB founder Max Schrems said in a statement that Europe's top court had already ruled on the issue in 2021.
"The European Court of Justice (CJEU) has already made it clear that Meta has no 'legitimate interest' to override users' right to data protection when it comes to advertising," he said.
"Yet the company is trying to use the same arguments for the training of undefined 'AI technology'. It seems that Meta is once again blatantly ignoring the judgements of the CJEU," Schrems said, adding that opting out was extremely complicated.
"Shifting the responsibility to the user is completely absurd. The law requires Meta to get opt-in consent, not to provide a hidden and misleading opt-out form," Schrems said, adding: "If Meta wants to use your data, they have to ask for your permission. Instead, they made users beg to be excluded".
NOYB asked data protection authorities in Austria, Belgium, France, Germany, Greece, Italy, Ireland, the Netherlands, Norway, Poland and Spain to launch an urgency procedure because of the imminent changes.



OpenAI, Anthropic Sign Deals with US Govt for AI Research and Testing

OpenAI logo is seen in this illustration taken May 20, 2024. (Reuters)

AI startups OpenAI and Anthropic have signed deals with the United States government for research, testing and evaluation of their artificial intelligence models, the US Artificial Intelligence Safety Institute said on Thursday.

The first-of-their-kind agreements come at a time when the companies are facing regulatory scrutiny over safe and ethical use of AI technologies.

California legislators are set to vote on a bill as soon as this week to broadly regulate how AI is developed and deployed in the state.

Under the deals, the US AI Safety Institute will have access to major new models from both OpenAI and Anthropic prior to and following their public release.

The agreements will also enable collaborative research to evaluate capabilities of the AI models and risks associated with them, Reuters reported.

"We believe the institute has a critical role to play in defining US leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on," said Jason Kwon, chief strategy officer at ChatGPT maker OpenAI.

Anthropic, which is backed by Amazon and Alphabet, did not immediately respond to a Reuters request for comment.

"These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI," said Elizabeth Kelly, director of the US AI Safety Institute.

The institute, a part of the US commerce department's National Institute of Standards and Technology (NIST), will also collaborate with the UK AI Safety Institute and provide feedback to the companies on potential safety improvements.

The US AI Safety Institute was launched last year as part of an executive order by President Joe Biden's administration to evaluate known and emerging risks of artificial intelligence models.