Tesla Recalls Most Cybertrucks due to Trim Detaching from Vehicle

A person sits inside a Tesla Cybertruck at a Tesla showroom in New York City, US, January 2, 2025. REUTERS/Adam Gray/File Photo

Tesla said Thursday it is recalling nearly all Cybertrucks in the United States to fix an exterior panel that could detach while driving, the latest in a series of call-backs for the pickup truck.

The recall covers just over 46,000 vehicles built from November 2023 through Feb. 27 of this year, Tesla said in a filing with the National Highway Traffic Safety Administration.

The recall could prove to be a setback for Tesla, whose stock has lost about half its value this year as the EV automaker grapples with rising competition, an aging lineup, and backlash against CEO Elon Musk's controversial role overseeing cuts to federal spending in the Trump White House, Reuters reported.

The recall addresses the risk that a stainless-steel exterior trim panel can detach from the vehicle, creating a road hazard and increasing the risk of a crash, Tesla said. Tesla's service will replace the rail panel assembly with a new one that meets durability testing requirements, the automaker said.

On Feb. 21, NHTSA notified Tesla of a complaint from a vehicle owner alleging a rail panel detachment.

Tesla said a detached rail panel may create a detectable noise inside the cabin, or customers may notice the panel coming loose or separating from the vehicle.

Tesla said it is aware of 151 warranty claims that may be related to the recall issue, but no collisions or injuries.

While Tesla does not break out deliveries of its Cybertrucks, the recalled vehicles represent the vast majority of Cybertrucks on the road, based on analyst estimates.

Demand for the unconventional EV pickup had already shown signs of weakness toward the end of last year, following several delays.

Shares of the EV maker fell 1.4% in premarket trading.

Tesla shares, initially boosted post-election due to Musk's relationship with US President Donald Trump, have fallen nearly 42% this year.

Analysts have pointed to a change in sentiment toward the EV maker among existing customers and potential new buyers, as protests at Tesla stores across the US and sales boycotts emerge.

Tesla accounts for a large portion of recalled vehicles in the US. In 2024, Tesla topped the list for US recalls, with its vehicles accounting for 5.1 million call-backs, according to recall management firm BizzyCar. However, most issues with the brand's cars were resolved with over-the-air software updates.




Firms and Researchers at Odds over Superhuman AI

Three-quarters of respondents to a survey by the US-based Association for the Advancement of Artificial Intelligence agreed that 'scaling up' LLMs was unlikely to produce artificial general intelligence. Joe Klamar / AFP/File

Hype is growing from leaders of major AI companies that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.

The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction, AFP said.

"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".

Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more skeptical.

Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude.

LeCun's view appears backed by a majority of academics in the field.

Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

'Genie out of the bottle'

Some academics believe that many of the companies' claims, which bosses have at times flanked with warnings about AGI's dangers for mankind, are a strategy to capture attention.

Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and AAAI member.

"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."

Skepticism among academic researchers is not total, with prominent figures like Nobel-winning physicist Geoffrey Hinton or 2018 Turing Prize winner Yoshua Bengio warning about dangers from powerful AI.

"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.

A similar, more recent thought experiment is the "paperclip maximiser".

This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth and ultimately all matter in the universe into paperclips or paperclip-making machines -- having first got rid of the human beings it judged might hinder its progress by switching it off.

While not "evil" as such, the maximiser would fall fatally short on what thinkers in the field call "alignment" of AI with human objectives and values.

Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it.

He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.

'Biggest thing ever'

The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility program at Britain's Cambridge University.

"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.

Even if Altman and Amodei may be "quite optimistic" about rapid timescales and AGI emerges much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.

"If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it".

The challenge can lie in communicating these ideas to politicians and the public.

Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.