Tencent to Release ‘Dungeon and Fighter’ Mobile Game in May 

A Tencent sign is seen at the World Internet Conference (WIC) in Wuzhen, Zhejiang province, China, October 20, 2019. (Reuters)

Chinese tech giant Tencent Holdings said on Monday it will release its much-anticipated "Dungeon and Fighter" mobile game on May 21 after seven years of development.

Officially named "Dungeon and Fighter: Origin", the action game, developed by Korean firm Nexon, is a mobile adaptation of the "Dungeon and Fighter" computer game, one of the world's most profitable computer games.

Tencent's shares rose about 4.5% on Monday morning.

The game was released in South Korea in 2022, where it became an instant hit. But its China release was delayed after the government cracked down on the gaming industry between 2018 and 2022.

In a February note, investment bank Jefferies expected the game to "secure a top 5 spot in revenue rankings" in China and to potentially generate between $600 million and $1.1 billion in annualized revenues there over time. But the bank expects a "cautious approach to engagement and monetization" during its initial launch.

Last month, Tencent conducted a closed test with 300,000 players that delivered strong results. In a note this month, HSBC wrote: "Testing for DnFm yielded solid performance in metrics like [daily active users], retention rate and user's paying propensity."



OpenAI, Anthropic Sign Deals with US Govt for AI Research and Testing

OpenAI logo is seen in this illustration taken May 20, 2024. (Reuters)

AI startups OpenAI and Anthropic have signed deals with the United States government for research, testing and evaluation of their artificial intelligence models, the US Artificial Intelligence Safety Institute said on Thursday.

The first-of-their-kind agreements come at a time when the companies are facing regulatory scrutiny over safe and ethical use of AI technologies.

California legislators are set to vote on a bill as soon as this week to broadly regulate how AI is developed and deployed in the state.

Under the deals, the US AI Safety Institute will have access to major new models from both OpenAI and Anthropic prior to and following their public release.

The agreements will also enable collaborative research to evaluate capabilities of the AI models and risks associated with them, Reuters reported.

"We believe the institute has a critical role to play in defining US leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on," said Jason Kwon, chief strategy officer at ChatGPT maker OpenAI.

Anthropic, which is backed by Amazon and Alphabet, did not immediately respond to a Reuters request for comment.

"These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI," said Elizabeth Kelly, director of the US AI Safety Institute.

The institute, part of the US Commerce Department's National Institute of Standards and Technology (NIST), will also collaborate with the UK AI Safety Institute and provide feedback to the companies on potential safety improvements.

The US AI Safety Institute was launched last year as part of an executive order by President Joe Biden's administration to evaluate known and emerging risks of artificial intelligence models.