Australia to Amend Law to Regulate Digital Payments Like Apple, Google Pay 

An illuminated Google logo is seen inside an office building in Zurich, Switzerland December 5, 2018. (Reuters)

Australia's government said on Monday it would bring Apple Pay, Google Pay and other digital payment services under the same regulatory umbrella as credit cards and other payments as part of legislation set to be introduced to parliament this week.

Digital wallets from the likes of Apple, Google and WeChat developer Tencent have exploded in popularity but are not captured by Australian payments law.

The legislation, first flagged last month, will broaden the laws that empower the Reserve Bank of Australia to regulate payments so that they apply to new and emerging technology.

"We are modernizing Australia's payments system to ensure it meets the needs of our economy now and into the future," Treasurer Jim Chalmers said in a statement.

"We want to make sure the increasing use of digital payments occurs in a way that helps promote greater competition, innovation and productivity across our entire economy."

Legislation is set to be introduced on Wednesday or Thursday, according to Chalmers' office.

Regulators are responding to the rapid growth of digital wallets, especially among the young. Transactions from a digital wallet hit 35% of all card transactions in the June quarter, up from 10% in early 2020.

Two-thirds of Australians aged between 18 and 29 use mobile payments. Before the pandemic it was less than 20%.

The amendments will also give a relevant minister power to subject a system or platform to special oversight in the event it presents a risk of "national significance."



OpenAI, Anthropic Sign Deals with US Govt for AI Research and Testing

OpenAI logo is seen in this illustration taken May 20, 2024. (Reuters)

AI startups OpenAI and Anthropic have signed deals with the United States government for research, testing and evaluation of their artificial intelligence models, the US Artificial Intelligence Safety Institute said on Thursday.

The first-of-their-kind agreements come at a time when the companies are facing regulatory scrutiny over safe and ethical use of AI technologies.

California legislators are set to vote on a bill as soon as this week to broadly regulate how AI is developed and deployed in the state.

Under the deals, the US AI Safety Institute will have access to major new models from both OpenAI and Anthropic prior to and following their public release.

The agreements will also enable collaborative research to evaluate capabilities of the AI models and risks associated with them, Reuters reported.

"We believe the institute has a critical role to play in defining US leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on," said Jason Kwon, chief strategy officer at ChatGPT maker OpenAI.

Anthropic, which is backed by Amazon and Alphabet, did not immediately respond to a Reuters request for comment.

"These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI," said Elizabeth Kelly, director of the US AI Safety Institute.

The institute, a part of the US commerce department's National Institute of Standards and Technology (NIST), will also collaborate with the UK AI Safety Institute and provide feedback to the companies on potential safety improvements.

The US AI Safety Institute was launched last year as part of an executive order by President Joe Biden's administration to evaluate known and emerging risks of artificial intelligence models.