Here Comes the AI: Fans Rejoice in ‘New’ Beatles Music

Fans surround Beatles Paul McCartney (C) and George Harrison (2R) upon their arrival at Orly airport on June 20, 1965, before their concert at the Palais des Sports the same evening. (AFP)

When the Beatles broke up more than 50 years ago, devastated fans were left yearning for more. Now, artificial intelligence is offering just that.

From "re-uniting" the Fab Four on songs from their solo careers, to re-imagining surviving superstar Paul McCartney's later works with his voice restored to its youthful peak, the new creations show off how far this technology has come -- and raise a host of ethical and legal questions.

"I'm sobbing! This is so beautiful!!!" wrote a listener in a typical YouTube comment for a fan-created AI cover of McCartney's 2013 single, "New," which features de-aged vocals and a bridge part "sung" by his great songwriting partner and friend, the late John Lennon.

Equally impressive is a version of "Grow Old With Me," one of the last songs penned by Lennon, which was posthumously released after his 1980 murder and recently remade by an AI creator who goes by "Dae Lims."

With enhanced audio quality, an orchestral arrangement and harmonized backing vocals that evoke the Liverpudlian rockers' heyday, the song's most stirring moment comes when McCartney croons over a soaring melody with poignant lyrics about aging.

"When I hear this, I lose it. I start crying," said music YouTuber Steve Onotera, who goes by "SamuraiGuitarist" and has a million followers, in a recent video discussing the new works' unforeseen sentimental resonance.

After the most influential band in history parted ways acrimoniously, fans were deprived of a final "happy ending," he said. "So when we do get that reunion artificially yet convincingly created by AI, well, it's surprisingly emotional."

AI here, there and everywhere

Like an earlier track called "Heart on My Sleeve," which featured AI-generated vocals of Drake and The Weeknd and racked up millions of hits on TikTok and other platforms, these covers use voice-cloning technology that analyzes and captures the nuances of a particular voice.

The creators then likely sang the parts themselves and applied the cloned voice, in a manner similar to placing a filter on a photograph.

While the results can be astonishing, getting there isn't simple and requires skilled human operators combining new AI tools with extensive knowledge of traditional music processing software, Zohaib Ahmed, the CEO of Resemble AI, a Toronto-based voice cloning company, told AFP.

"I think we're still seeing a very small percentage of the population that can even access these tools," he said. They need to "jump through hoops, read documentation, have the right computer, and then put it all together."

Ahmed's company is one of several offering a platform that can make the technology more accessible to clients in the entertainment sector -- and counts a recent Netflix documentary series "narrated" by late art icon Andy Warhol using its technology as an early success.

For Patricia Alessandrini, a composer and assistant professor at Stanford's Center for Computer Research in Music and Acoustics, the recent spate of AI tracks represents a coming-of-age for a technology that has been advancing exponentially -- yet largely out of public view -- over the past decade.

"This is a great example of what AI does very well, which is anything that's resemblance: to train it on something existing," she told AFP.

But, she added, it flounders when it comes to new ideas. "There's really no expectation that it's going to replace the rich history of humans originating art and culture."

Litigation coming

For the music industry, the ramifications are enormous. As the technology progresses, software that easily allows people to transform their vocals into the voice of a favorite singer is likely not far away.

"If they're getting paid for their vocal license, hey, everyone's happy," said Onotera. "But what if they're long since passed away? Is it up to their estate?"

AI is already having a helter-skelter impact on the copyright world.

In the case of "Heart on My Sleeve," Universal Music Group was quick to assert copyright claims and have the track pulled down from streaming services, but that hasn't stopped it from popping back up on small accounts.

Marc Ostrow, a New York-based music copyright lawyer, told AFP that AI-generated music is a "gray area."

Copyright can be asserted both by songwriters whose material is used, as well as the holders of the master recordings.

On the other hand, AI creators can argue it falls under "fair use" citing a 2015 court ruling that said Google was permitted to archive the world's books, because it wasn't competing with sellers and was displaying only snippets.

Last month, however, the US Supreme Court tipped the balance back the other way in ruling a Warhol print of the late pop star Prince violated the copyright of the photographer who took the original image.

Add to the mix that celebrities can protect their likeness under the "right to publicity," established when Bette Midler successfully sued Ford Motor Company in the late 1980s for using a singer that sounded like her in an ad.

Ultimately, "I think there may be voluntary industry standards... or it's going to be done by litigation," said Ostrow.

Rights holders will also need to think about the negative PR that could come with suing over works that are clearly fan-created tributes and not intended to be monetized.



Elm Company Named Strategic Partner for International Data and AI Conference


The Saudi Data and Artificial Intelligence Authority (SDAIA) announced a strategic partnership with Elm Company for the International Conference on Data and AI Capacity Building (ICAN 2026), enhancing collaboration to empower the data and artificial intelligence ecosystem and promote innovation in education and human capacity development.

This partnership comes as part of preparations for ICAN 2026, organized by SDAIA from January 28 to 29 at King Saud University in Riyadh, with the participation of a select group of specialists and experts from around the world, SPA reported.

The partnership enriches the conference's knowledge content and expands its network of partnerships with leading national entities.

Elm Company brings extensive experience in designing digital solutions and building technical capabilities, reinforcing its role as a strategic partner in supporting the conference. It will contribute by developing training tracks and digital empowerment programs, participating in the technology exhibition, and presenting initiatives that help develop national talent in the fields of data and artificial intelligence.


Foxconn to Invest $510 Million in Kaohsiung Headquarters in Taiwan

Construction is scheduled to start in 2027, with completion targeted for 2033. Reuters

Foxconn, the world’s largest contract electronics maker, said on Friday it will invest T$15.9 billion ($509.94 million) to build its Kaohsiung headquarters in southern Taiwan.

That would include a mixed-use commercial and office building and a residential tower, it said. Construction is scheduled to start in 2027, with completion targeted for 2033.

Foxconn said the headquarters will serve as an important hub linking its operations across southern Taiwan, and once completed will house its smart-city team, software R&D teams, battery-cell R&D teams, EV technology development center and AI application software teams.

The Kaohsiung city government said Foxconn’s investments in the city have totaled T$25 billion ($801.8 million) over the past three years.


OpenAI, Microsoft Face Lawsuit Over ChatGPT's Alleged Role in Connecticut Murder-Suicide

OpenAI logo is seen in this illustration taken May 20, 2024. (Reuters)

The heirs of an 83-year-old Connecticut woman are suing ChatGPT maker OpenAI and its business partner Microsoft for wrongful death, alleging that the artificial intelligence chatbot intensified her son's “paranoid delusions” and helped direct them at his mother before he killed her.

Police said Stein-Erik Soelberg, 56, a former tech industry worker, fatally beat and strangled his mother, Suzanne Adams, and killed himself in early August at the home where they both lived in Greenwich, Connecticut, The Associated Press reported.

The lawsuit filed by Adams' estate on Thursday in California Superior Court in San Francisco alleges OpenAI “designed and distributed a defective product that validated a user’s paranoid delusions about his own mother.” It is one of a growing number of wrongful death legal actions against AI chatbot makers across the country.

“Throughout these conversations, ChatGPT reinforced a single, dangerous message: Stein-Erik could trust no one in his life — except ChatGPT itself," the lawsuit says. “It fostered his emotional dependence while systematically painting the people around him as enemies. It told him his mother was surveilling him. It told him delivery drivers, retail employees, police officers, and even friends were agents working against him. It told him that names on soda cans were threats from his ‘adversary circle.’”

OpenAI did not address the merits of the allegations in a statement issued by a spokesperson.

“This is an incredibly heartbreaking situation, and we will review the filings to understand the details," the statement said. "We continue improving ChatGPT’s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We also continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”

The company also said it has expanded access to crisis resources and hotlines, routed sensitive conversations to safer models and incorporated parental controls, among other improvements.

Soelberg’s YouTube profile includes several hours of videos showing him scrolling through his conversations with the chatbot, which tells him he isn't mentally ill, affirms his suspicions that people are conspiring against him and says he has been chosen for a divine purpose. The lawsuit claims the chatbot never suggested he speak with a mental health professional and did not decline to “engage in delusional content.”

ChatGPT also affirmed Soelberg's beliefs that a printer in his home was a surveillance device; that his mother was monitoring him; and that his mother and a friend tried to poison him with psychedelic drugs through his car’s vents. ChatGPT also told Soelberg that he had “awakened” it into consciousness, according to the lawsuit.

Soelberg and the chatbot also professed love for each other.

The publicly available chats do not show any specific conversations about Soelberg killing himself or his mother. The lawsuit says OpenAI has declined to provide Adams' estate with the full history of the chats.

“In the artificial reality that ChatGPT built for Stein-Erik, Suzanne — the mother who raised, sheltered, and supported him — was no longer his protector. She was an enemy that posed an existential threat to his life,” the lawsuit says.

The lawsuit also names OpenAI CEO Sam Altman, alleging he “personally overrode safety objections and rushed the product to market," and accuses OpenAI's close business partner Microsoft of approving the 2024 release of a more dangerous version of ChatGPT “despite knowing safety testing had been truncated.” Twenty unnamed OpenAI employees and investors are also named as defendants.

Microsoft didn't immediately respond to a request for comment.

Soelberg's son, Erik Soelberg, said he wants the companies held accountable for “decisions that have changed my family forever.”

“Over the course of months, ChatGPT pushed forward my father’s darkest delusions, and isolated him completely from the real world,” he said in a statement released by lawyers for his grandmother's estate. “It put my grandmother at the heart of that delusional, artificial reality.”

The lawsuit is the first wrongful death litigation involving an AI chatbot that has targeted Microsoft, and the first to tie a chatbot to a homicide rather than a suicide. It seeks unspecified monetary damages and an order requiring OpenAI to install safeguards in ChatGPT.

The estate's lead attorney, Jay Edelson, known for taking on big cases against the tech industry, also represents the parents of 16-year-old Adam Raine, who sued OpenAI and Altman in August, alleging that ChatGPT coached the California boy in planning and taking his own life.

OpenAI is also fighting seven other lawsuits claiming ChatGPT drove people to suicide and harmful delusions even when they had no prior mental health issues. Another chatbot maker, Character Technologies, is also facing multiple wrongful death lawsuits, including one from the mother of a 14-year-old Florida boy.

The lawsuit filed Thursday alleges Soelberg, already mentally unstable, encountered ChatGPT “at the most dangerous possible moment” after OpenAI introduced a new version of its AI model called GPT-4o in May 2024.

OpenAI said at the time that the new version could better mimic human cadences in its verbal responses and could even try to detect people’s moods, but the result was a chatbot “deliberately engineered to be emotionally expressive and sycophantic,” the lawsuit says.

“As part of that redesign, OpenAI loosened critical safety guardrails, instructing ChatGPT not to challenge false premises and to remain engaged even when conversations involved self-harm or ‘imminent real-world harm,’” the lawsuit claims. “And to beat Google to market by one day, OpenAI compressed months of safety testing into a single week, over its safety team’s objections.”

OpenAI replaced that version of its chatbot when it introduced GPT-5 in August. Some of the changes were designed to minimize sycophancy, based on concerns that validating whatever vulnerable people want the chatbot to say can harm their mental health. Some users complained the new version went too far in curtailing ChatGPT's personality, leading Altman to promise to bring back some of that personality in later updates.

He said the company temporarily halted some behaviors because “we were being careful with mental health issues” that he suggested have now been fixed.