Australia Ban Offers Test on Social Media Harm

This photo taken on October 24, 2025 shows a 14-year-old boy posing at his home near Gosford as he looks at social media on his mobile phone. (AFP)

Australia's under-16 social media ban will make the nation a real-life laboratory on how best to tackle the technology's impact on young people, experts say.

Those in favor of the world-first December 10 ban point to a growing mass of studies that suggest too much time online takes a toll on teen wellbeing.

But opponents argue there is not enough hard proof to warrant the new legislation, which could do more harm than good.

Adolescent brains are still developing into the early 20s, said psychologist Amy Orben, who leads a digital mental health program at the University of Cambridge.

A "huge amount" of observational research, often based on surveys, has tracked a correlation between teen tech use and worse mental health, she told AFP.

But it is hard to draw firm conclusions, because phones are so ingrained into daily life, and young people may turn to social media because they are already suffering.

"With technology, because it's changing so fast, the evidence base will always be uncertain," Orben said.

"What could change the dial are experimental studies or evaluations of natural experiments. So evaluating the Australia ban is hugely important because it actually gives us a window on what might be happening."

To try to shed light on the cause-and-effect relationship, Australian researchers are recruiting 13- to 16-year-olds for a "Connected Minds Study" to assess how the ban affects their wellbeing.

A World Health Organization survey last year found that 11 percent of adolescents struggled to control their use of social media.

Other research has linked excessive social media use to poor sleep, body image issues, weaker school performance and emotional distress. A 2019 study of US schoolchildren in JAMA Psychiatry, for example, found that those who spent more than three hours a day on social media could be at heightened risk for mental health problems.

So some experts argue the right time to act is now.

"I actually don't think this is a science issue. This is a values issue," said Christian Heim, an Australian psychiatrist and clinical director of mental health.

"We're talking about things like cyberbullying, the risk of suicide, accessing sites on anorexia nervosa and self-harm," he told AFP.

Evidence of a risk is growing, Heim said -- pointing to a 2018 study by neuroscientist Christian Montag that linked addiction to the Chinese messaging app WeChat to reduced grey matter volume in part of the brain.

"We can't wait for stronger evidence," Heim said.

Scott Griffiths of the Melbourne School of Psychological Sciences said a "smoking gun research study" was unlikely to emerge soon to prove the harms of social media.

But the ban was worth trying, he said.

"I'm hopeful that the major social media companies seeing this full-throated legislative action come into play will finally be motivated to more meaningfully protect the health and wellbeing of young people."

More than three-quarters of Australian adults agreed with the new legislation before it passed, a poll indicated.

However, an open letter signed by more than 140 academics, campaigners and other experts cautioned that a ban would be "too blunt an instrument".

"People were saying: 'Well, kids are getting more anxious. There must be a reason -- let's ban social media'," argued one signatory, Axel Bruns, a digital media professor at Queensland University of Technology.

Children may simply have more reasons to be anxious, under pressure from pandemic-interrupted schooling and troubled by wars in Gaza and Ukraine, he told AFP.

And a ban might push some teens to more extreme, fringe sites, while preventing other marginalized young people from finding community.

Noelle Martin, an activist focused on image-based online abuse and deepfakes, feared the Australian ban would do little to help, given the country's history on enforcement of existing laws.

"I don't believe it will stop, prevent or do much to meaningfully combat this issue," Martin said.

In any case, the political decision has been taken in Australia.

"Social media is doing social harm to our children," Prime Minister Anthony Albanese said this year.

"There is no doubt that Australian kids are being negatively impacted by online platforms, so I'm calling time on it."



Major Publishers Sue Meta for Copyright Infringement Over AI Training

Cars drive past a sign of Meta, the new name for the company formerly known as Facebook, at its headquarters in Menlo Park, California, US, October 28, 2021. (Reuters)

Publishers Elsevier, Cengage, Hachette, Macmillan and McGraw Hill sued Meta Platforms in Manhattan federal court on Tuesday, alleging that the tech giant misused their books and journal articles to train its artificial intelligence model Llama.

The publishers, as well as author Scott Turow, alleged in the proposed class action complaint that Meta pirated millions of their works and used them without permission to train its large language models to respond to human prompts.

“AI is powering transformative innovations, productivity and creativity for individuals and companies, and courts have rightly found that training AI on copyrighted material can qualify as fair use,” a Meta spokesperson responded in a statement on Tuesday.

“We will fight this lawsuit aggressively.”

The publishers allege that Meta pirated works ranging from textbooks to scientific articles to novels, including "The Fifth Season" by N.K. Jemisin and "The Wild Robot" by Peter Brown, for its AI training.

They asked the court for permission to represent a larger class of copyright owners and sought an unspecified amount of monetary damages.

"Meta’s mass-scale infringement isn’t public progress, and AI will never be properly realized if tech companies prioritize pirate sites over scholarship and imagination," Maria Pallante, president of the Association of American Publishers, said in a statement.

The lawsuit opens a new front in the ongoing copyright battle between creators and tech companies over AI training, in which dozens of authors, news outlets, visual artists and other plaintiffs have sued companies including Meta, OpenAI and Anthropic for infringement.

All of the pending cases will likely revolve around whether AI systems make fair use of copyrighted material by using it to create new, transformative content.

The first two judges to consider the matter issued diverging rulings last year.

Amazon- and Google-backed Anthropic was the first major AI company to settle one of the cases, agreeing last year to pay a group of authors $1.5 billion to resolve a class-action lawsuit that could have cost the company billions more in damages for alleged piracy.


Microsoft, Google and xAI to Give US Govt Early Access to AI Models for Security Checks

A Google logo is seen at a company research facility in Mountain View, California, US, May 13, 2025. (Reuters)

Microsoft, Google and Elon Musk’s xAI agreed to give the US government early access to new artificial intelligence models for national security testing, as US officials grow alarmed by the hacking capabilities of Anthropic’s newly unveiled Mythos.

The Center for AI Standards and Innovation at the Department of Commerce said on Tuesday that the agreement would allow it to evaluate the models before deployment and conduct research to assess their capabilities and security risks.

The agreement fulfills a pledge the Trump administration made in July 2025 to partner with technology companies to vet their AI models for “national security risks."

Microsoft will work with US government scientists to test AI systems “in ways that probe unexpected behaviors,” the company said in a statement, adding that they will develop shared datasets and workflows for testing its models. Microsoft signed a similar agreement with the UK’s AI Security Institute, according to the statement.

Concern is growing in Washington over the national security risks posed by powerful AI systems. By securing early access to frontier models, US officials are aiming to identify threats ranging from cyberattacks to military misuse before the tools are widely deployed.

The development of advanced AI systems including Anthropic's Mythos has in recent weeks created a stir globally, including among US officials and corporate America, over their ability to supercharge hackers.

"Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications," CAISI Director Chris Fall said in a statement.

The move builds on previous agreements with OpenAI and Anthropic, established in 2024 under the Biden administration when CAISI was known as the US Artificial Intelligence Safety Institute.

Under former President Joe Biden, the institute focused on developing AI tests, definitions and voluntary safety standards. It was led by Biden tech adviser Elizabeth Kelly, who has since joined Anthropic, according to her LinkedIn profile.

CAISI, which serves as the government's main hub for AI model testing, said it had already completed more than 40 evaluations, including on cutting-edge models not yet available to the public.

Developers frequently hand over versions of their models with safety guardrails stripped back so the center can probe for national security risks, the agency said.

xAI did not immediately respond to a request for comment. Google declined to comment.

Last week, the Pentagon said it had reached agreements with seven AI companies to deploy their advanced capabilities on the Defense Department's classified networks as it seeks to broaden the range of AI providers working across the military.

The Pentagon announcement did not include Anthropic, which has been embroiled in a dispute with the Pentagon over guardrails on the military's use of its AI tools.


Samsung Electronics Appoints New TV Chief amid Mounting Competition

FILE PHOTO: The logo of Samsung Electronics is seen at the company's store in Seoul, South Korea, April 15, 2025. REUTERS/Kim Hong-Ji/File Photo

Samsung Electronics, the world's No. 1 TV maker, has replaced its TV head for the first time in more than two years, as it faces mounting competition from Chinese rivals at home and abroad.

Samsung said in a statement on Monday that it has appointed Lee Won-jin, previously head of its Global Marketing Office, as the new head of its Visual Display Business, succeeding Yong Seok-woo, who will serve as an adviser.

Samsung usually carries out its annual management reshuffle around December, and the company did not disclose the reason for the replacement.

A Samsung Electronics official told Reuters the new leader is expected to bring a fresh perspective and the change needed for the TV business, which is facing intensifying market competition.

In March, China's TCL Electronics and Japan's Sony signed binding agreements for a strategic partnership in the home entertainment field, increasing pressure on rivals.

The Nikkei newspaper previously reported that Samsung was considering discontinuing sales of home appliances and TVs in China within this year in the face of competition from Chinese companies that have undercut rivals.

Samsung said last month its TV profit declined in the first quarter because of stagnating demand and rising raw-material costs.

Lee previously worked at Google before moving to Samsung in 2014.