New Sensors for Self-driving Cars

Soroush Salehian and Mina Rezk, of Aeva, a Silicon Valley start-up making new guidance systems for driverless vehicles. Credit Jason Henry for The New York Times

Soroush Salehian raised both arms and spun in circles as if celebrating a touchdown.

Across the room, perched on a tripod, a small black device monitored this little dance and streamed it to a nearby laptop. Mr. Salehian appeared as a collection of tiny colored dots, some red, some blue, some green. Each dot showed the precise distance to a particular point on his body, while the colors showed the speed of his movements. As his right arm spun forward, it turned blue. His left arm, spinning away, turned red.
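
In software terms, a display like the one on that laptop needs only two numbers per point: a distance and a radial velocity. The snippet below is a minimal sketch of the coloring logic, not Aeva's software; the coordinates, velocity values and thresholds are assumed for illustration.

```python
import numpy as np

# Minimal sketch of a velocity-colored point cloud, as in the Aeva demo.
# Each row is an assumed measurement: x, y, z position in meters plus a
# radial velocity in m/s (negative = moving toward the sensor).
points = np.array([
    [2.0,  0.3, 1.5, -0.8],   # right arm swinging toward the device
    [2.0, -0.3, 1.5,  0.8],   # left arm swinging away
    [2.0,  0.0, 1.0,  0.0],   # torso, roughly stationary
])

def color_for(v_radial: float) -> str:
    """Map radial velocity to a display color: blue toward, red away."""
    if v_radial < -0.1:
        return "blue"
    if v_radial > 0.1:
        return "red"
    return "green"

for x, y, z, v in points:
    distance = np.sqrt(x**2 + y**2 + z**2)  # range to this point
    print(f"point at {distance:.2f} m -> {color_for(v)}")
```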

“See how the arms are different?” said his business partner, Mina Rezk, pointing at the laptop. “It’s measuring different velocities.”

Messrs. Salehian and Rezk are the founders of a new Silicon Valley start-up called Aeva, and their small black device is designed for self-driving cars. The veterans of Apple’s secretive Special Projects Group aim to give these autonomous vehicles a more complete, detailed and reliable view of the world around them — something that is essential to their evolution.

Today’s driverless cars under development at companies like General Motors, Toyota, Uber and the Google spinoff Waymo track their surroundings using a wide variety of sensors, including cameras, radar, GPS antennas and lidar (short for “light detection and ranging”) devices that measure distances using pulses of light.
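
The pulse-based approach rests on a simple calculation: half the round-trip time of a light pulse, multiplied by the speed of light, is the distance to whatever reflected it. A minimal sketch, with an illustrative round-trip time:

```python
# Time-of-flight ranging, the principle behind pulsed lidar.
C = 299_792_458.0  # speed of light in m/s

def range_from_pulse(round_trip_seconds: float) -> float:
    """Distance to a target from the round-trip time of one light pulse."""
    return C * round_trip_seconds / 2.0

# A pulse that returns after about 200 nanoseconds hit something ~30 m away.
print(f"{range_from_pulse(200e-9):.1f} m")  # -> 30.0 m
```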

But there are gaps in the way these sensors operate, and combining their disparate streams of data is difficult. Aeva’s prototype — a breed of lidar that measures distances more accurately and also captures speed — aims to fill several of these sizable holes.

“I don’t even think of this as a new kind of lidar,” said Tarin Ziyaee, co-founder and chief technology officer at the self-driving taxi start-up Voyage, who has seen the Aeva prototype. “It’s a whole different animal.”

Founded in January and funded by the Silicon Valley venture capital firm Lux Capital, among others, Aeva joins a widespread effort to build more effective sensors for autonomous vehicles, a trend that extends from start-ups like Luminar, Echodyne and Metawave to established hardware makers like the German multinational Robert Bosch.

The company’s name, Aeva, is a play on “Eve,” the name of the robot in the Pixar movie “WALL-E.”

The market for autonomous vehicles will grow to $42 billion by 2025, according to research by the Boston Consulting Group. But for that to happen, the vehicles will need new and more powerful sensors. Today’s autonomous cars are ill prepared for high-speed driving, bad weather and other common situations.

The recent progress in self-driving cars coincided with the arrival of new lidar sensors from a Silicon Valley company called Velodyne. These sensors gave cars a way of measuring distances to nearby vehicles, pedestrians and other objects. They also provided Google and other companies with a way of mapping urban roadways in three dimensions, so that cars know exactly where they are at any given moment — something GPS cannot always provide.

But these lidar sensors have their own shortcomings. They can gather information only about objects that are relatively close, which limits how fast the cars can travel. Their measurements aren’t always detailed enough to distinguish one object from another. And when multiple driverless cars are close together, their signals can become garbled.

Other devices can pick up some of the slack. Cameras are a better way of identifying pedestrians and street signs, for example, and radar works over longer distances. That’s why today’s self-driving cars track their surroundings through so many different sensors. But despite this wide array of hardware — which can cost hundreds of thousands of dollars per vehicle — even the best autonomous vehicles still have trouble in many situations that humans navigate with ease.
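
One standard way to combine overlapping readings from different sensors is to weight each by how noisy it is. The sketch below shows inverse-variance weighting, a textbook fusion rule; it is a generic illustration, not any carmaker's pipeline, and the sensor readings and noise figures are assumed.

```python
# Generic sensor fusion by inverse-variance weighting: the less noisy
# estimate dominates the combined result. All numbers are illustrative.
def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Combine two independent estimates of the same quantity."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    return fused, 1.0 / (w_a + w_b)  # fused value and its variance

# Lidar: 49.8 m with ~5 cm noise; radar: 51.0 m with ~50 cm noise.
value, variance = fuse(49.8, 0.05**2, 51.0, 0.5**2)
print(f"fused range: {value:.2f} m")  # lands close to the lidar reading
```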

With their new sensor, Messrs. Salehian and Rezk are working to change that. Mr. Rezk is an engineer who designed optical hardware for Nikon, and he presumably handled optical sensors for Apple’s driverless car project, though he and Mr. Salehian declined to say which “special project” they worked on at the company. They left Apple late last year.

Where current lidar sensors send out individual pulses, Aeva’s device sends out a continuous wave of light. By reading the way this far more complex signal bounces off surrounding objects, Mr. Rezk said, the device can capture a far more detailed image while also tracking velocity. You can think of it as a cross between lidar, which is so good at measuring depth, and radar, which is so good at measuring speed.
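
That hybrid behavior can be sketched with the standard frequency-modulated continuous-wave (FMCW) equations: sweep the laser's frequency up and down, and the two beat frequencies of the returning light encode range in their sum and Doppler shift (hence velocity) in their difference. The parameters below are illustrative assumptions, not Aeva's actual design, which the company has not published.

```python
# Sketch of triangular-chirp FMCW ranging: range and radial velocity
# recovered from the up-sweep and down-sweep beat frequencies.
C = 299_792_458.0      # speed of light, m/s
WAVELENGTH = 1550e-9   # a common telecom-band laser wavelength (assumed), m
CHIRP_SLOPE = 1e14     # frequency sweep rate, Hz/s (assumed)

def range_and_velocity(f_beat_up: float, f_beat_down: float):
    """Range (m) and radial velocity (m/s) from the two beat frequencies."""
    f_range = (f_beat_up + f_beat_down) / 2.0    # range-induced component
    f_doppler = (f_beat_down - f_beat_up) / 2.0  # Doppler-induced component
    rng = C * f_range / (2.0 * CHIRP_SLOPE)
    vel = f_doppler * WAVELENGTH / 2.0           # positive = receding
    return rng, vel

# Beats of 65.42 MHz and 68.00 MHz imply ~100 m range, ~1 m/s recession.
rng, vel = range_and_velocity(65.42e6, 68.00e6)
print(f"range {rng:.1f} m, velocity {vel:.2f} m/s")
```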

Mr. Rezk also said the device’s continuous wave would provide greater range and resolution than existing lidar devices, deal better with weather and highly reflective objects like bridge railings, and avoid interference with other optical sensors.

Cars will continue to use multiple kinds of sensors, in part because redundancy helps ensure that these cars are safe. But Aeva aims to give these cars a better view of the world from a smaller and less expensive set of sensors.

Researchers at the University of California, Berkeley, have built similar hardware, and companies like Velodyne and the start-ups Oryx Vision and Quanergy say they are exploring similar ideas. Like these efforts, the Aeva prototype is still under development, and the company plans to sell devices next year. But it shows how autonomous car sensors need to evolve — and that they are indeed evolving.

Ultimately, new sensors will allow cars to make better decisions.

The New York Times



Meta Criticizes EU Antitrust Move Against WhatsApp Block on AI Rivals

This illustration photograph taken on December 1, 2025, shows the logo of WhatsApp displayed on a smartphone's screen, in Frankfurt am Main, western Germany. (Photo by Kirill KUDRYAVTSEV / AFP)

Meta Platforms on Monday criticized EU regulators after they charged the US tech giant with breaching antitrust rules and threatened to halt its block on AI rivals on its messaging service WhatsApp.

"The facts are that there is no reason for ⁠the EU to intervene in the WhatsApp Business API. There are many AI options and people can use them from app stores, operating systems, devices, websites, and ⁠industry partnerships," a Meta spokesperson said in an email.

"The Commission's logic incorrectly assumes the WhatsApp Business API is a key distribution channel for these chatbots."


Chinese Robot Makers Ready for Lunar New Year Entertainment Spotlight

A folk performer breathes fire during a performance ahead of Lunar New Year celebrations in a village in Huai'an, in China's eastern Jiangsu Province on February 7, 2026. (AFP)

In China, humanoid robots are serving as Lunar New Year entertainment, with their manufacturers pitching their song-and-dance skills to the general public as well as potential customers, investors and government officials.

On Sunday, Shanghai-based robotics start-up Agibot live-streamed an almost hour-long variety show featuring its robots dancing, performing acrobatics and magic tricks, lip-syncing ballads and appearing in comedy sketches. Other Agibot humanoid robots waved from an audience section.

An estimated 1.4 million people watched on the Chinese streaming platform Douyin. Agibot, which called the promotional stunt "the world's first robot-powered gala," did not have an immediate estimate for total viewership.

The show ran a week ahead of China's annual Spring Festival gala to be aired by state television, an event that has become an important, if unlikely, venue for Chinese robot makers to show off their success.

A squad of 16 full-size humanoids from Unitree joined human dancers in performing at China Central Television's 2025 gala, drawing stunned accolades from millions of viewers.

Less than three weeks later, Unitree's founder was invited to a high-profile symposium chaired by Chinese President Xi Jinping. The Hangzhou-based robotics firm has since been preparing for a potential initial public offering.

This year's CCTV gala will include participation by four humanoid robot startups: Unitree, Galbot, Noetix and MagicLab, the companies and broadcaster have said.

Agibot's gala employed over 200 robots. It was streamed on social media platforms RedNote, Sina Weibo, TikTok and its Chinese version Douyin. Chinese-language television networks HTTV and iCiTi TV also broadcast the performance.

"When robots begin to understand Lunar New Year and begin to have a sense of humor, the human-computer interaction may come faster than we think," Ma Hongyun, a photographer and writer with 4.8 million followers on Weibo, said in a post.

Agibot, which says its humanoid robots are designed for a range of applications, including in education, entertainment and factories, plans to launch an initial public offering in Hong Kong, Reuters has reported.

State-run Securities Times said Agibot had opted out of the CCTV gala in order to focus spending on research and development. The company did not respond to a request for comment.

The company demonstrated two of its robots to Xi during a visit in April last year.

US billionaire Elon Musk, who has pivoted automaker Tesla toward a focus on artificial intelligence and the Optimus humanoid robot, has said the only competitive threat he faces in robotics is from Chinese firms.


AI to Track Icebergs Adrift at Sea in Boon for Science

© Jonathan NACKSTRAND / AFP

British scientists said Thursday that a world-first AI tool to catalogue and track icebergs as they break apart into smaller chunks could fill a "major blind spot" in predicting climate change.

Icebergs release enormous volumes of freshwater when they melt in open water, affecting global climate patterns and altering ocean currents and ecosystems, AFP reported.

But scientists have long struggled to keep track of these floating behemoths once they break into thousands of smaller chunks, their fate and impact on the climate largely lost to the seas.

To fill in the gap, the British Antarctic Survey has developed an AI system that automatically identifies and names individual icebergs at birth and tracks their sometimes decades-long journey to a watery grave.

Using satellite images, the tool captures the distinct shape of icebergs as they break off -- or calve -- from glaciers and ice sheets on land.

As they disintegrate over time, the system solves a giant matching puzzle, linking the smaller "child" fragments back to their "parent" and creating detailed family trees never before possible at this scale.
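
The linking step can be pictured with a much-simplified overlap heuristic: assign each new fragment to the earlier iceberg whose footprint it overlaps most. The British Antarctic Survey's real system is an AI model working on satellite imagery; the toy masks, names and threshold below are assumptions for illustration only.

```python
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two boolean footprint masks."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def link_fragments(parents: dict, fragments: dict, threshold: float = 0.1):
    """Map each fragment to its best-overlapping parent, if any."""
    tree = {}
    for name, mask in fragments.items():
        best = max(parents, key=lambda p: iou(parents[p], mask))
        if iou(parents[best], mask) >= threshold:
            tree[name] = best
    return tree

# Toy 8x8 scene: one parent berg splits into two child fragments.
parent = np.zeros((8, 8), dtype=bool); parent[2:6, 2:6] = True
child_a = np.zeros((8, 8), dtype=bool); child_a[2:6, 2:4] = True
child_b = np.zeros((8, 8), dtype=bool); child_b[2:6, 4:6] = True
print(link_fragments({"A68": parent}, {"A68a": child_a, "A68b": child_b}))
# -> {'A68a': 'A68', 'A68b': 'A68'}
```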

It represents a huge improvement on existing methods, where scientists pore over satellite images to visually identify and track only the largest icebergs one by one.

The AI system, which was tested using satellite observations over Greenland, provides "vital new information" for scientists and improves predictions about the future climate, said the British Antarctic Survey.

Knowing where these giant slabs of freshwater were melting into the ocean was especially crucial with ice loss expected to increase in a warming world, it added.

"What's exciting is that this finally gives us the observations we've been missing," Ben Evans, a machine learning expert at the British Antarctic Survey, said in a statement.

"We've gone from tracking a few famous icebergs to building full family trees. For the first time, we can see where each fragment came from, where it goes and why that matters for the climate."

This use of AI could also be adapted to aid safe passage for navigators through treacherous polar regions littered with icebergs.

Iceberg calving is a natural process. But scientists say the rate at which ice is being lost from Antarctica is increasing, probably because of human-induced climate change.