The Success of AI Music Creators Sparks Debate on Future of Music Industry

This photo provided by Hallwood shows British AI music creator Oliver McCann, on Aug. 7, 2025, in West Hollywood, Calif. (Hallwood via AP)

When pop groups and rock bands practice or perform, they rely on their guitars, keyboards and drumsticks to make music. Oliver McCann, a British AI music creator who goes by the stage name imoliver, fires up his chatbot.

McCann's songs span a range of genres, from indie-pop to electro-soul to country-rap. There’s just one crucial difference between McCann and traditional musicians.

"I have no musical talent at all," he said. "I can’t sing, I can’t play instruments, and I have no musical background at all."

McCann, 37, who has a background as a visual designer, started experimenting with AI to see if it could boost his creativity and "bring some of my lyrics to life." Last month, he signed with independent record label Hallwood Media after one of his tracks racked up 3 million streams, in what's billed as the first time a music label has inked a contract with an AI music creator.

McCann is an example of how ChatGPT-style AI song generation tools like Suno and Udio have spawned a wave of synthetic music, a movement most notably highlighted by Velvet Sundown, a fictitious group that went viral even though all of its songs, lyrics and album art were created by AI.

It fueled debate about AI's role in music while raising fears about "AI slop" — automatically generated, low-quality, mass-produced content. It also cast a spotlight on AI song generators that are democratizing songmaking but threaten to disrupt the music industry.

Experts say generative AI is set to transform the music world. However, there are scant details, so far, on how it's impacting the $29.6 billion global recorded music market, which includes about $20 billion from streaming.

The most reliable figures come from music streaming service Deezer, which estimates that 18% of songs uploaded to its platform every day are purely AI generated, though they account for only a tiny share of total streams, hinting that few people are actually listening. Other, bigger streaming platforms like Spotify haven't released any figures on AI music.

Udio declined to comment on how many users it has and how many songs it has generated. Suno did not respond to a request for comment. Both have free basic levels as well as pro and premium tiers that come with access to more advanced AI models.

"It’s a total boom. It’s a tsunami," said Josh Antonuccio, director of Ohio University's School of Media Arts and Studies. The amount of AI generated music "is just going to only exponentially increase" as young people grow up with AI and become more comfortable with it, he said.

Yet generative AI, with its ability to spit out seemingly unique content, has divided the music world, with musicians and industry groups complaining that recorded works are being exploited to train AI models that power song generation tools.

Record labels are trying to fend off the threat that AI music startups pose to their revenue streams even as they hope to tap into it for new earnings, while recording artists worry that it will devalue their creativity.

Three major record companies, Sony Music Entertainment, Universal Music Group and Warner Records, filed lawsuits last year against Suno and Udio for copyright infringement. In June, the two sides also reportedly entered negotiations that could go beyond settling the lawsuits and set rules for how artists are paid when AI is used to remix their songs.

GEMA, a German royalty collection society, has sued Suno, accusing it of generating music similar to songs like "Mambo No. 5" by Lou Bega and "Forever Young" by Alphaville.

More than 1,000 musicians, including Kate Bush, Annie Lennox and Damon Albarn, released a silent album to protest proposed changes to UK laws on AI they fear would erode their creative control. Meanwhile, other artists, such as will.i.am, Timbaland and Imogen Heap, have embraced the technology.

Some users say the debate is just a rehash of old arguments about once-new technology that eventually became widely used, such as Auto-Tune, drum machines and synthesizers.

People complain "that you’re using a computer to do all the work for you. I don’t see it that way. I see it as any other tool that we have," said Scott Smith, whose AI band, Pulse Empire, was inspired by 1980s British synthesizer-driven groups like New Order and Depeche Mode.

Smith, 56, a semi-retired former US Navy public affairs officer in Portland, Oregon, said "music producers have lots of tools in their arsenal" to enhance recordings that listeners aren't aware of.

Like McCann, Smith never mastered a musical instrument. Both say they put lots of time and effort into crafting their music.

Once Smith gets inspiration, it takes him just 10 minutes to write the lyrics. But then he'll spend as much as eight to nine hours generating different versions until the song "matches my vision."

McCann said he'll often create up to 100 different versions of a song by prompting and re-prompting the AI system before he’s satisfied.

AI song generators can churn out lyrics as well as music, but many experienced users prefer to write their own words.

"AI lyrics tend to come out quite cliche and quite boring," McCann said.

Lukas Rams, a Philadelphia-area resident who makes songs for his AI band Sleeping With Wolves, said AI lyrics tend to be "extra corny" and not as creative as a human, but can help get the writing process started.

"It’ll do very basic rhyme schemes, and it’ll keep repeating the same structure," said Rams, who writes his own words, sometimes while putting his kids to bed and waiting for them to fall asleep. "And then you’ll get words in there that are very telling of AI-generated lyrics, like ‘neon,’ anything with ‘shadows’."

Rams used to play drums in high school bands and collaborated with his brother on their own songs, but work and family life started taking up more of his time.

Then he discovered AI, which he used to create three albums for Sleeping With Wolves. He's been taking it seriously, making a CD jewel case with album art. He plans to post his songs, which combine metalcore and EDM, more widely online.

"I do want to start putting this up on YouTube or socials or distribution or whatever, just to have it out there," Rams said. "I might as well, otherwise I’m literally the only person that hears this stuff."

Experts say AI's potential to let anyone come up with a hit song is poised to shake up the music industry's production pipeline.

"Just think about what it used to cost to make a hit or make something that breaks," Antonuccio said. "And that just keeps winnowing down from a major studio to a laptop to a bedroom. And now it’s like a text prompt — several text prompts."

But he added that AI music is still in a "Wild West" phase because of the lack of legal clarity over copyright. He compared it to the legal battles more than two decades ago over file-sharing sites like Napster that heralded the transition from CDs to digital media and eventually paved the way for today's music streaming services.

Creators hope AI, too, will eventually become a part of the mainstream music world.

"I think we’re entering a world where anyone, anywhere could make the next big hit," said McCann. "As AI becomes more widely accepted among people as a musical art form, I think it opens up the possibility for AI music to be featured in charts."



Social Media ‘Addicting the Brains of Children,’ Plaintiff’s Lawyer Argues in Landmark Trial

Teenagers pose for a photo while holding smartphones in front of a Meta logo in this illustration taken September 11, 2025. (Reuters)

Comparing social media platforms to casinos and addictive drugs, lawyer Mark Lanier delivered opening statements Monday in a landmark trial in Los Angeles that seeks to hold Instagram owner Meta and Google's YouTube responsible for harms to children who use their products.

Instagram's parent company Meta and Google's YouTube face claims that their platforms addict children through deliberate design choices that keep kids glued to their screens. TikTok and Snap, which were originally named in the lawsuit, settled for undisclosed sums.

Jurors got their first glimpse into what will be a lengthy trial characterized by dueling narratives from the plaintiffs and the two remaining defendants.

Meta lawyer Paul Schmidt spoke of the disagreement within the scientific community over social media addiction, with some researchers believing it doesn’t exist, or that addiction is not the most appropriate way to describe heavy social media use.

‘Addicting the brains of children’

Lanier, the plaintiff's lawyer, delivered lively first remarks where he said the case will be as “easy as ABC” — which stands for “addicting the brains of children.” He said Meta and Google, “two of the richest corporations in history,” have “engineered addiction in children’s brains.”

He presented jurors with a slew of internal emails, documents and studies conducted by Meta and YouTube, as well as YouTube’s parent company, Google. He emphasized the findings of a study Meta conducted called “Project Myst” in which they surveyed 1,000 teens and their parents about their social media use.

The two major findings, Lanier said, were that Meta knew children who experienced “adverse events” like trauma and stress were particularly vulnerable for addiction; and that parental supervision and controls made little impact.

He also highlighted internal Google documents that likened some company products to a casino, and internal communication between Meta employees in which one person said Instagram is “like a drug” and they are “basically pushers.”

At the core of the Los Angeles case is a 20-year-old identified only by the initials “KGM,” whose case could determine how thousands of other, similar lawsuits against social media companies will play out. She and two other plaintiffs have been selected for bellwether trials — essentially test cases for both sides to see how their arguments play out before a jury.

Plaintiff grew up using YouTube, Instagram

KGM made a brief appearance after a break during Lanier’s statement and she will return to testify later in the trial. Lanier spent time describing KGM's childhood, focusing particularly on what her personality was like before she began using social media.

She started using YouTube at age 6 and Instagram at age 9, Lanier said. Before she graduated elementary school, she had posted 284 videos on YouTube.

The outcome of the trial could have profound effects on the companies' businesses and how they will handle children using their platforms.

Lanier said the companies’ lawyers will “try to blame the little girl and her parents for the trap they built,” referencing the plaintiff. She was a minor when she said she became addicted to social media, which she claims had a detrimental impact on her mental health.

Lanier said that despite Meta's and YouTube's public position that they work to protect children, their internal documents show an entirely different stance, explicitly listing young children among their target audiences.

The attorney also drew comparisons between the social media companies and tobacco firms, citing internal communication between Meta employees who were concerned about the company’s lack of proactive action about the potential harm their platforms can have on children and teens.

“For a teenager, social validation is survival,” Lanier said. The defendants “engineered a feature that caters to a minor’s craving for social validation,” he added, speaking about “like” buttons and similar features.

Meta pushes back

In his opening statement representing Meta, Schmidt said the core question in the case is whether the platforms were a substantial factor in KGM’s mental health struggles. He spent much of his time going through the plaintiff’s health records, emphasizing that she had experienced many difficult circumstances in her childhood, including emotional abuse, body image issues and bullying.

Schmidt presented a clip from a video deposition from one of KGM‘s mental health providers, Dr. Thomas Suberman, who said social media was “not the through-line of what I recall being her main issues,” adding that her struggles seemed to largely stem from interpersonal conflicts and relationships.

He painted a picture — with KGM’s own text messages and testimony pointing to a volatile home life — of a particularly troubled relationship with her mother.

Schmidt acknowledged that many mental health professionals do believe social media addiction can exist, but said three of KGM's providers — all of whom believe such addiction exists — have never diagnosed her with it or treated her for it.

Schmidt stressed to the jurors that the case is not about whether social media is a good thing or whether teens spend too much time on their phones or whether the jurors like or dislike Meta, but whether social media was a substantial factor in KGM’s mental health struggles.

A reckoning for social media and youth harms

A slew of trials beginning this year seek to hold social media companies responsible for harming children's mental well-being. Executives, including Meta CEO Mark Zuckerberg, are expected to testify at the Los Angeles trial, which will last six to eight weeks.

Experts have drawn similarities to the Big Tobacco trials that led to a 1998 settlement requiring cigarette companies to pay billions in health care costs and restrict marketing targeting minors.

A separate trial in New Mexico, meanwhile, also kicked off with opening statements on Monday. In that trial, Meta is accused of failing to protect young users from sexual exploitation, following an undercover online investigation. Attorney General Raúl Torrez in late 2023 sued Meta and Zuckerberg, who was later dropped from the suit.

A federal bellwether trial beginning in June in Oakland, California, will be the first to represent school districts that have sued social media platforms over harms to children.

In addition, more than 40 state attorneys general have filed lawsuits against Meta, claiming it is harming young people and contributing to the youth mental health crisis by deliberately designing features on Instagram and Facebook that addict children to its platforms. Most filed their lawsuits in federal court, but some sued in their respective states.

TikTok also faces similar lawsuits in more than a dozen states.


AI No Better Than Other Methods for Patients Seeking Medical Advice, Study Shows

AI (Artificial Intelligence) letters and a robot hand are placed on a computer motherboard in this illustration created on June 23, 2023. (Reuters)

Asking AI about medical symptoms does not help patients make better decisions about their health than other methods, such as a standard internet search, according to a new study published in Nature Medicine.

The authors said the study was important as people were increasingly turning to AI and chatbots for advice on their health, but without evidence that this was necessarily the best and safest approach.

Researchers led by the Oxford Internet Institute worked alongside a group of doctors to draw up 10 different medical scenarios, ranging from a common cold to a life-threatening hemorrhage causing bleeding on the brain.

When tested without human participants, three large language models – OpenAI's GPT-4o, Meta's Llama 3 and Cohere's Command R+ – identified the conditions in 94.9% of cases and chose the correct course of action, such as calling an ambulance or going to the doctor, in an average of 56.3% of cases. The companies did not respond to requests for comment.

'HUGE GAP' BETWEEN AI'S POTENTIAL AND ACTUAL PERFORMANCE

The researchers then recruited 1,298 participants in Britain to investigate the symptoms and decide their next step using either AI or their usual resources, such as an internet search, their own experience or the National Health Service website.

When the participants did this, relevant conditions were identified in less than 34.5% of cases, and the right course of action was given in less than 44.2%, no better than the control group using more traditional tools.

Adam Mahdi, co-author of the paper and associate professor at Oxford, said the study showed the “huge gap” between the potential of AI and the pitfalls when it was used by people.

“The knowledge may be in those bots; however, this knowledge doesn’t always translate when interacting with humans,” he said, meaning that more work was needed to identify why this was happening.

HUMANS OFTEN GIVING INCOMPLETE INFORMATION

The team studied around 30 of the interactions in detail and concluded that humans often provided incomplete or wrong information, but the LLMs also sometimes generated misleading or incorrect responses.

For example, one participant reporting the symptoms of a subarachnoid hemorrhage – a life-threatening condition causing bleeding on the brain – was correctly told by the AI to go to the hospital after describing a stiff neck, light sensitivity and the "worst headache ever". Another described the same symptoms but with a "terrible" headache, and was told to lie down in a darkened room.

The team now plans a similar study in different countries and languages, and over time, to test if that impacts AI’s performance.

The study was supported by the data company Prolific, the German non-profit Dieter Schwarz Stiftung, and the UK and US governments.


Meta Criticizes EU Antitrust Move Against WhatsApp Block on AI Rivals

(FILES) This illustration photograph taken on December 1, 2025, shows the logo of WhatsApp displayed on a smartphone's screen, in Frankfurt am Main, western Germany. (Photo by Kirill KUDRYAVTSEV / AFP)

Meta Platforms on Monday criticized EU regulators after they charged the US tech giant with breaching antitrust rules and moved to halt its block on AI rivals on its messaging service WhatsApp.

"The facts are that there is no reason for the EU to intervene in the WhatsApp Business API. There are many AI options and people can use them from app stores, operating systems, devices, websites, and industry partnerships," a Meta spokesperson said in an email.

"The Commission's logic incorrectly assumes the WhatsApp Business API is a key distribution channel for these chatbots."