Court: Google Broke Australian Law over Location Data Collection

The Google logo. Reuters file photo

Google violated Australian law by misleading users of Android mobile devices about the use of their location data, a court ruled Friday in a landmark decision against the global digital giant.

The US company faces potential fines of "many millions" of dollars over the case, which was brought by the Australian Competition and Consumer Commission (ACCC), the regulator's chief, Rod Sims, said.

The federal court found that in 2017 and 2018 Google misled some users of phones and tablets featuring its Android operating system by collecting their personally identifiable location information even when they had opted out of sharing "Location History" data.

It said Google notably failed to make clear that allowing tracking of "Web & App Activity" under a separate setting on their devices included the location details.

Various studies around the world have documented the problem of location data being gathered through Android and iPhone devices without users' knowledge or explicit permission.

Such data can be highly valuable to advertisers trying to pitch location-related products and services.

But the ACCC's Sims said Friday's court decision was "the first ruling of its type in the world in relation to these location data issues."

"This is an important victory for consumers, especially anyone concerned about their privacy online, as the court's decision sends a strong message to Google and others that big businesses must not mislead their customers," he said.

"Today's decision is an important step to make sure digital platforms are upfront with consumers about what is happening with their data and what they can do to protect it."

In his ruling, Federal Court Judge Thomas Thawley "partially" accepted the ACCC case against Google, noting that the company's "conduct would not have misled all reasonable users" of its service.

But he added that Google's action "misled or was likely to mislead some reasonable users" and that "the number or proportion of reasonable users who were misled, or were likely to have been misled, does not matter" in establishing contraventions of the law.

The ACCC said it would seek "pecuniary penalties" that could amount to US$850,000 per breach, potentially totaling "many millions" of dollars, national broadcaster ABC quoted Sims as saying.

Google protested the ruling, which it noted had rejected some of the ACCC's "broad claims" against it and concerned only a narrowly defined class of users.

"We disagree with the remaining findings and are currently reviewing our options, including a possible appeal," AFP quoted a spokesperson as saying.

"We provide robust controls for location data and are always looking to do more -- for example we recently introduced auto delete options for Location History, making it even easier to control your data," they said.

Last year, Google was targeted alongside Facebook by the ACCC for failing to compensate Australian news organizations for content posted to their platforms.

The dispute led to landmark legislation requiring digital firms to pay for news and resulted in Google and Facebook signing deals worth millions of dollars to Australian media companies.



OpenAI, Anthropic Sign Deals with US Govt for AI Research and Testing

OpenAI logo is seen in this illustration taken May 20, 2024. (Reuters)

AI startups OpenAI and Anthropic have signed deals with the United States government for research, testing and evaluation of their artificial intelligence models, the US Artificial Intelligence Safety Institute said on Thursday.

The first-of-their-kind agreements come at a time when the companies are facing regulatory scrutiny over safe and ethical use of AI technologies.

California legislators are set to vote on a bill as soon as this week to broadly regulate how AI is developed and deployed in the state.

Under the deals, the US AI Safety Institute will have access to major new models from both OpenAI and Anthropic prior to and following their public release.

The agreements will also enable collaborative research to evaluate capabilities of the AI models and risks associated with them, Reuters reported.

"We believe the institute has a critical role to play in defining US leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on," said Jason Kwon, chief strategy officer at ChatGPT maker OpenAI.

Anthropic, which is backed by Amazon and Alphabet, did not immediately respond to a Reuters request for comment.

"These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI," said Elizabeth Kelly, director of the US AI Safety Institute.

The institute, a part of the US Commerce Department's National Institute of Standards and Technology (NIST), will also collaborate with the UK AI Safety Institute and provide feedback to the companies on potential safety improvements.

The US AI Safety Institute was launched last year as part of an executive order by President Joe Biden's administration to evaluate known and emerging risks of artificial intelligence models.