Samsung's Galaxy XR Headset to Take on Apple with Help from Google and Qualcomm 

Visitors walk past the Samsung Electronics booth during the Korea Electronics Show 2025 at the COEX convention and exhibition center in Seoul on October 22, 2025. (AFP)

Samsung Electronics released its Galaxy XR extended reality headset on Tuesday, counting on AI features from Google to propel it into the nascent and uncertain market of computing-on-your-face that is dominated by Meta and Apple.

The headset, resembling those made by others such as Meta, will cost $1,799, or about half of what Apple charges for its Vision Pro headset.

It is the first of a family of new devices, powered by the Android XR operating system and artificial intelligence, in a long-term partnership with Alphabet's Google and Qualcomm.

"There's a whole journey ahead of us in terms of other devices and form factors," said Google's vice president of AR/XR Sharham Izadi in an interview ahead of the launch.

Up next will be the release of lighter eyeglasses, executives said, declining to elaborate. Samsung has announced partnerships with Warby Parker and South Korean luxury eyewear brand Gentle Monster.

The race to find new form factors for entertainment and computing, underpinned by AI, has fueled a battle among the biggest technology companies. Instagram-owner Meta overwhelmingly dominates the VR headset industry with about an 80% market share, with Apple trailing behind.

ChatGPT-maker OpenAI is also diving into the market, spending $6.5 billion in May to buy iPhone designer Jony Ive's hardware startup io Products as it works out what devices should look like in the AI age.

Samsung has studied the extended reality segment for the past 10 years, and it was not until about four years ago that the company approached Google to jointly develop the project, codenamed "Moohan," meaning "infinite" in Korean, said Jay Kim, executive vice president at Samsung's mobile division.

"We have been agonizing over when to bring the product to the market, and considering various factors such as technology evolution and market situation, we believe that now is the best timing," he said at a briefing in Seoul on Wednesday.

USING GOOGLE AI STRENGTH

The long-awaited Samsung Galaxy XR, first demonstrated last year, combines virtual reality and mixed reality features. The goggles immerse users in videos, such as on Alphabet's YouTube, games and pictures, while also allowing them to interact with their surroundings.

The latter feature takes advantage of Google's Gemini service, which can analyze what users are seeing and offer directions or information about real-world objects when users look at them or circle them with their fingers.

In an interview last week, executives from Google and Samsung said they believe extended reality headsets, which have yet to ignite mass consumer interest, would benefit greatly from Google's multimodal AI features, which run throughout the device and can process different types of data such as text, photos and videos.

It's a set of software capabilities that Apple has yet to demonstrate, despite rolling out an updated Vision Pro with a more powerful chip.

"Google entering the fray again changes the dynamic in the ecosystem," said Anshel Sag, principal analyst at Moor Insights & Strategy, noting that Google's software added $1,000 in value to the device by some estimates. "Google really wants people to get the full experience of Gemini when using this headset."

Customers who buy the device this year will receive a bundle of free services including 12 months of access to Google AI Pro, YouTube Premium, Google Play Pass and other specialized XR content, the companies said.

The prototype for the AI-enhanced goggles was ready by the time Apple launched its Vision Pro headset in 2024, executives said, as they sought to enhance existing applications like YouTube, Google Photos and Google Maps while creating new immersive experiences.

Like many first-generation technologies, it attempts to do many things at once, with potential consumer and enterprise applications.

Qualcomm is providing its Snapdragon XR2+ Gen 2 chip to power the headset.

DIFFICULT MARKET

Many tech CEOs have been seduced by what they say is the next big thing in personal computing, but the market remains tiny by tech standards.

Research firm Gartner estimates the global head-mounted display market will grow 2.6% from this year to $7.27 billion next year. Lighter, eyeglass-type AI devices, such as Meta's Ray-Ban smart glasses made in collaboration with EssilorLuxottica, are expected to drive most of this growth.

Despite the expanding competitive landscape, the global virtual reality market, which includes the more recently launched "mixed reality" headsets, has declined for three consecutive years. It is weakening again: shipments in 2025 are expected to fall 20% year on year, according to research firm Counterpoint.

"With a potentially more competitive price point than Apple’s Vision Pro, Samsung’s Project Moohan headset could emerge as a strong contender in the premium VR segment, particularly within the enterprise market," Counterpoint senior analyst Flora Tang.

The Galaxy XR is the first Android XR device, but Samsung's experiments with face-mounted computing date back a decade to the Gear VR, a headset that held a slotted-in smartphone, made in partnership with VR headset maker Oculus. Meta acquired Oculus in 2014.



Ping-Pong Robot Ace Makes History by Beating Top-Level Human Players

Sony AI autonomous robot Ace returns a shot back against its human opponent, table tennis player Yamato Kawamata, during a match in December 2025, as seen in this photograph released on April 22, 2026. (Sony AI/Handout via Reuters)

An autonomous robot ping-pong player dubbed Ace has achieved a milestone for AI and robotics in Tokyo by competing against and sometimes defeating top-level human players at table tennis, a feat that could presage an array of other applications for similarly adept robots.

Ace, created by the Japanese company Sony's AI research division, is the first robot to attain expert-level performance in a competitive physical sport, one that requires rapid decisions and precision execution, the project's leader said. Ace did so by employing high-speed perception, AI-based control and a state-of-the-art robotic system.

There have been various ping-pong-playing robots since 1983, but until now they were unable to rival highly skilled human competitors. Ace changed that with its performances against human elite-level and professional players in matches following the rules of the International Table Tennis Federation, the sport's governing body, and officiated by licensed umpires.

"Unlike computer games, where prior AI systems surpass human experts, physical and real-time sports such as table tennis remain a major open challenge due to their requirements for fast, precise and adversarial interactions near obstacles and at the edge ‌of human reaction ‌time," said Peter Dürr, director of Sony AI Zurich and leader for Sony AI's project Ace.

The ‌project's ⁠goal was not ⁠only to compete at table tennis but to develop insights into how robots can perceive, plan and act with human-like speed and precision in dynamic environments, Dürr said.

"The success of Ace, with its perception system and learning-based control algorithm, suggests that similar techniques could be applied to other areas requiring fast, real-time control and human interaction - such as manufacturing and service robotics, as well as applications across sports, entertainment and safety-critical physical domains," said Dürr, lead author of a study describing Ace's achievements published on Wednesday in the journal Nature.

In matches detailed in the study, Ace in April 2025 won three out of five versus elite players and lost two matches against professional players, the top skill level in the ⁠sport. Sony AI said that since then Ace beat professional players in December 2025 and last ‌month.

Companies worldwide are making advances with robots. On Sunday, for instance, robots outran human runners in a half-marathon race in Beijing.

'A BLUR TO THE HUMAN EYE'

AI systems already have excelled in digital domains, in strategy games such as chess and Go and at complex video games.

While video games take place in simulated environments, table tennis requires rapid decision-making, precise physical execution and continuous adaptation to an unpredictable opponent, Dürr said. The ball moves at high speeds with complex spins and trajectories, pushing humans and robots to operate at the limits of sensing, prediction and motor control, he added.

Ace's architecture integrates nine synchronized cameras and three vision systems to track a spinning ball with exceptional accuracy and speedy processing time.

"This is fast enough to capture motion that would be a blur to the human eye," Dürr ‌said.

The researchers developed a custom robot platform featuring eight joints. This was, Dürr said, the minimum number necessary to execute competitive shots: three for the racket's position, two for its orientation and three for the shot's speed and strength.

Mayuka Taira, a professional table tennis player who lost a match to Ace last December, said in comments provided by Sony AI that the robot's strengths "are that it is very hard to predict, and it shows no emotion."

"Because you can't read its reactions, it's impossible to sense what kind of shots it dislikes or struggles with, and that makes it even more difficult to play against," Taira said.

Rui Takenaka, an elite-level player who has won and lost matches against Ace, said in comments provided by Sony AI: "When it came to my serve, if I used a serve with complex spin, Ace also returned the ball with complex spin, which made it difficult for me. But when I used a simple serve - what we call a knuckle serve - Ace returned a simpler ball. That made it easier for me to attack on the third shot, and I think that was the key reason why I was able to win."

Ace has room for improvement, Dürr said.

"Ace has a superhuman ability to read the spin of incoming balls, and superhuman reaction time. As it learns to play not from watching humans play, but is trained by itself in simulation, it also reacts differently from human players and creates surprising situations," Dürr said. "At the same time, professional human athletes are very good at adapting to their opponent and finding weaknesses, which is an area that we are working on."


ICAIRE Launches Global ‘AI Glossary Challenge’ to Promote Responsible Innovation

The initiative aims to promote the ethical use of modern technologies across international contexts

The International Center for AI Research and Ethics (ICAIRE), a Riyadh-based UNESCO affiliate, has launched the AI Glossary Challenge, inviting researchers, students, and practitioners to develop knowledge tools that support a responsible AI ecosystem.

By standardizing concepts and establishing a shared knowledge base, the initiative aims to promote the ethical use of modern technologies across international contexts.

The challenge comprises three specialized tracks: AI Glossary Tools for developing digital applications such as APIs and governance dashboards; Dataset Creation for building high-quality, bias-free cultural datasets; and Cultural Hallucinations Tools to detect and interpret contextual errors in large language models, enhancing their global adaptability.

Hosted on the Kaggle platform, the competition offers prizes to winning teams to foster a specialized community dedicated to AI ethics.


Florida Launches Criminal Probe into OpenAI and ChatGPT Over Deadly Shooting

This illustration photograph taken in Mulhouse, eastern France on February 11, 2025, shows the logo of OpenAI's artificial intelligence chatbot ChatGPT. (AFP)

Florida Attorney General James Uthmeier said on Tuesday the state was launching a criminal probe into OpenAI and its artificial intelligence app ChatGPT over a deadly shooting last year that killed two people at Florida State University.

A gunman killed two people and wounded six others at Florida State University in April last year before he was shot by officers and hospitalized. The suspect was charged with multiple counts of murder and attempted murder.

"The chatbot advised the shooter on what type of gun to use, on which ammo went with which gun, on whether or not a gun would be useful at short range," Uthmeier said in a press briefing.

"If it was a person on the other end of that screen, we would be charging them with murder."

Uthmeier's office said the investigation will determine whether "OpenAI bears criminal responsibility for ChatGPT's actions in the shooting."

The Office of Statewide Prosecution subpoenaed OpenAI for information and records, it added.

The rise of AI has fed a host of concerns, ranging from worries that data centers' electricity demand could raise power prices for consumers to fears that the technology could cost workers their jobs, disrupt the democratic process, turbocharge fraud or help people plan criminal activities.

An OpenAI spokeswoman told US media that the shooting was a tragedy, but the company had no responsibility. The spokeswoman said that after learning of the incident, OpenAI identified a ChatGPT account believed to be associated with the suspect and "proactively shared this information with law enforcement."

"In this case, ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity," the OpenAI spokeswoman said.