Why TikTok's Security Risks Keep Raising Fears

The TikTok app logo is seen in this illustration taken August 22, 2022. (Reuters)

TikTok is once again fending off claims that its Chinese parent company, ByteDance, would share user data from its popular video-sharing app with the Chinese government, or push propaganda and misinformation on its behalf.

China’s Foreign Ministry on Wednesday accused the United States itself of spreading disinformation about TikTok's potential security risks, The Associated Press reported. The accusation followed a report in the Wall Street Journal that the Committee on Foreign Investment in the US, part of the Treasury Department, was threatening a US ban on the app unless its Chinese owners divest their stake.

So are the data security risks real? And should users be worried that the TikTok app will be wiped off their phones?

Here’s what to know:

WHAT ARE THE CONCERNS ABOUT TIKTOK?

Both the FBI and the Federal Communications Commission have warned that ByteDance could share TikTok user data — such as browsing history, location and biometric identifiers — with China’s authoritarian government.

A law implemented by China in 2017 requires companies to give the government any personal data relevant to the country’s national security. There’s no evidence that TikTok has turned over such data, but fears abound due to the vast amount of user data it, like other social media companies, collects.

Concerns around TikTok were heightened in December when ByteDance said it fired four employees who accessed data on two journalists from BuzzFeed News and The Financial Times while attempting to track down the source of a leaked report about the company.

HOW IS THE US RESPONDING?

White House National Security Council spokesperson John Kirby declined to comment when asked Thursday to address the Chinese foreign ministry's comments about TikTok, citing the review being conducted by the Committee on Foreign Investment.

Kirby also could not confirm that the administration sent TikTok a letter warning that the US government may ban the application if its Chinese owners don’t sell their stake, but added, “we have legitimate national security concerns with respect to data integrity that we need to observe.”

In 2020, then-President Donald Trump and his administration sought to force ByteDance to sell off its US assets and ban TikTok from app stores. Courts blocked the effort, and President Joe Biden rescinded Trump’s orders but ordered an in-depth study of the issue. A planned sale of TikTok’s US assets was also shelved as the Biden administration negotiated a deal with TikTok that would address some of the national security concerns.

In Congress, US Sens. Richard Blumenthal and Jerry Moran, a Democrat and a Republican, wrote a letter in February to Treasury Secretary Janet Yellen urging the Committee on Foreign Investment panel, which she chairs, to “swiftly conclude its investigation and impose strict structural restrictions” between TikTok's American operations and ByteDance, including potentially separating the companies.

At the same time, lawmakers have introduced measures that would expand the Biden administration's authority to enact a national ban on TikTok. The White House has already backed a Senate proposal that has bipartisan support.

HOW HAS TIKTOK ALREADY BEEN RESTRICTED?

On Thursday, British authorities said they are banning TikTok on government-issued phones on security grounds, following similar moves by the European Union’s executive branch, which temporarily banned TikTok from employee phones. Denmark and Canada have also announced efforts to block it on government-issued phones.

Last month, the White House said it would give US federal agencies 30 days to delete TikTok from all government-issued mobile devices. Congress, the US armed forces and more than half of US states had already banned the app.

WHAT DOES TIKTOK SAY?

TikTok spokesperson Maureen Shanahan said the company was already answering security concerns through “transparent, US-based protection of US user data and systems, with robust third-party monitoring, vetting, and verification.”

In June, TikTok said it would route all data from US users to servers controlled by Oracle, the Silicon Valley company it chose as its US tech partner in 2020 in an effort to avoid a nationwide ban. But it is storing backups of the data in its own servers in the US and Singapore. The company said it expects to delete US user data from its own servers, but it has not provided a timeline as to when that would occur.

TikTok CEO Shou Zi Chew is set to testify next week before the House Energy and Commerce Committee about the company’s privacy and data-security practices, as well as its relationship with the Chinese government.

Meanwhile, TikTok’s parent company ByteDance has been trying to position itself as an international company rather than a Chinese one, though it was founded in Beijing in 2012 by its current chief executive, Liang Rubo, and others.

Theo Bertram, TikTok’s vice president of policy in Europe, said in a tweet Thursday that ByteDance “is not a Chinese company.” Bertram said global investors own 60% of the company, employees own 20% and its founders own 20%, and that its leaders are based in cities such as Singapore, New York and Beijing.

ARE THE SECURITY RISKS LEGITIMATE?

It depends on who you ask.

Some tech privacy advocates say while the potential abuse of privacy by the Chinese government is concerning, other tech companies have data-harvesting business practices that also exploit user information.

“If policy makers want to protect Americans from surveillance, they should advocate for a basic privacy law that bans all companies from collecting so much sensitive data about us in the first place, rather than engaging in what amounts to xenophobic showboating that does exactly nothing to protect anyone,” said Evan Greer, director of the nonprofit advocacy group Fight for the Future.

Karim Farhat, a researcher with the Internet Governance Project at Georgia Tech, said a TikTok sale would be “completely irrelevant to any of the alleged ‘national security’ threats” and go against “every free market principle and norm” of the State Department’s internet freedom principles.

Others say there is legitimate reason for concern.

People who use TikTok might think they’re not doing anything that would be of interest to a foreign government, but that’s not always the case, said Anton Dahbura, executive director of the Johns Hopkins University Information Security Institute. Important information about the United States is not strictly limited to nuclear power plants or military facilities; it extends to other sectors, such as food processing, the finance industry and universities, Dahbura said.

IS THERE PRECEDENT FOR BANNING TECH COMPANIES?

Last year, the US banned the sale of communications equipment made by Chinese companies Huawei and ZTE, citing risks to national security. But banning the sale of equipment is easier to enforce than banning an app, which users can access through the web.

Such a move could also be challenged in court on the grounds that it violates the First Amendment, as some civil liberties groups have argued.



OpenAI Starts Testing Ads in ChatGPT

The OpenAI logo is seen in this illustration taken May 20, 2024. (Reuters)

OpenAI has begun placing ads in the basic versions of its ChatGPT chatbot, a bet that users will not mind the interruptions as the company seeks revenue as its costs soar.

"The test will be for logged-in adult users on the Free and Go subscription tiers" in the United States, OpenAI said Monday. The Go subscription costs $8 in the United States.

Only a small percentage of its nearly one billion users pay for its premium subscription services, which will remain ad-free.

"Ads do not influence the answers ChatGPT gives you, and we keep your conversations with ChatGPT private from advertisers," the company said.

Since ChatGPT's launch in 2022, OpenAI's valuation has soared to $500 billion in funding rounds, higher than that of any other private company. Some analysts expect it could go public with a trillion-dollar valuation.

But the ChatGPT maker burns through cash at a furious rate, mostly on the powerful computing required to deliver its services.

Its chief executive Sam Altman had long expressed his dislike of advertising, citing concerns that it could erode trust in ChatGPT's content.

The about-face drew a jab over the weekend from rival Anthropic, which made its advertising debut at the Super Bowl with commercials saying its Claude chatbot would stay ad-free.


Social Media ‘Addicting the Brains of Children,’ Plaintiff’s Lawyer Argues in Landmark Trial

Teenagers pose for a photo while holding smartphones in front of a Meta logo in this illustration taken September 11, 2025. (Reuters)

Comparing social media platforms to casinos and addictive drugs, lawyer Mark Lanier delivered opening statements Monday in a landmark trial in Los Angeles that seeks to hold Instagram owner Meta and Google's YouTube responsible for harms to children who use their products.

Instagram's parent company Meta and Google's YouTube face claims that their platforms addict children through deliberate design choices that keep kids glued to their screens. TikTok and Snap, which were originally named in the lawsuit, settled for undisclosed sums.

Jurors got their first glimpse into what will be a lengthy trial characterized by dueling narratives from the plaintiffs and the two remaining defendants.

Meta lawyer Paul Schmidt spoke of the disagreement within the scientific community over social media addiction, with some researchers believing it doesn’t exist, or that addiction is not the most appropriate way to describe heavy social media use.

‘Addicting the brains of children’

Lanier, the plaintiff's lawyer, delivered lively first remarks where he said the case will be as “easy as ABC” — which stands for “addicting the brains of children.” He said Meta and Google, “two of the richest corporations in history,” have “engineered addiction in children’s brains.”

He presented jurors with a slew of internal emails, documents and studies conducted by Meta and YouTube, as well as YouTube’s parent company, Google. He emphasized the findings of a study Meta conducted called “Project Myst” in which they surveyed 1,000 teens and their parents about their social media use.

The two major findings, Lanier said, were that Meta knew children who experienced “adverse events” like trauma and stress were particularly vulnerable to addiction, and that parental supervision and controls made little impact.

He also highlighted internal Google documents that likened some company products to a casino, and internal communication between Meta employees in which one person said Instagram is “like a drug” and they are “basically pushers.”

At the core of the Los Angeles case is a 20-year-old identified only by the initials “KGM,” whose case could determine how thousands of other, similar lawsuits against social media companies will play out. She and two other plaintiffs have been selected for bellwether trials — essentially test cases for both sides to see how their arguments play out before a jury.

Plaintiff grew up using YouTube, Instagram

KGM made a brief appearance after a break during Lanier’s statement and she will return to testify later in the trial. Lanier spent time describing KGM's childhood, focusing particularly on what her personality was like before she began using social media.

She started using YouTube at age 6 and Instagram at age 9, Lanier said. Before she graduated from elementary school, she had posted 284 videos on YouTube.

The outcome of the trial could have profound effects on the companies' businesses and how they will handle children using their platforms.

Lanier said the companies’ lawyers will “try to blame the little girl and her parents for the trap they built,” referencing the plaintiff. She was a minor when she said she became addicted to social media, which she claims had a detrimental impact on her mental health.

Lanier said that while Meta and YouTube publicly maintain that they work to protect children, their internal documents show an entirely different position, with explicit references to young children as target audiences.

The attorney also drew comparisons between the social media companies and tobacco firms, citing internal communication between Meta employees who were concerned about the company’s lack of proactive action about the potential harm their platforms can have on children and teens.

“For a teenager, social validation is survival,” Lanier said. The defendants “engineered a feature that caters to a minor’s craving for social validation,” he added, speaking about “like” buttons and similar features.

Meta pushes back

In his opening statement representing Meta, Schmidt said the core question in the case is whether the platforms were a substantial factor in KGM’s mental health struggles. He spent much of his time going through the plaintiff’s health records, emphasizing that she had experienced many difficult circumstances in her childhood, including emotional abuse, body image issues and bullying.

Schmidt presented a clip from a video deposition from one of KGM‘s mental health providers, Dr. Thomas Suberman, who said social media was “not the through-line of what I recall being her main issues,” adding that her struggles seemed to largely stem from interpersonal conflicts and relationships.

He painted a picture — with KGM’s own text messages and testimony pointing to a volatile home life — of a particularly troubled relationship with her mother.

Schmidt acknowledged that many mental health professionals do believe social media addiction can exist, but said three of KGM’s providers — all of whom believe such addiction exists — have never diagnosed her with it, or treated her for it.

Schmidt stressed to the jurors that the case is not about whether social media is a good thing, whether teens spend too much time on their phones, or whether the jurors like or dislike Meta, but whether social media was a substantial factor in KGM’s mental health struggles.

A reckoning for social media and youth harms

A slew of trials beginning this year seek to hold social media companies responsible for harming children's mental well-being. Executives, including Meta CEO Mark Zuckerberg, are expected to testify at the Los Angeles trial, which will last six to eight weeks.

Experts have drawn similarities to the Big Tobacco trials that led to a 1998 settlement requiring cigarette companies to pay billions in health care costs and restrict marketing targeting minors.

A separate trial in New Mexico, meanwhile, also kicked off with opening statements on Monday. In that trial, Meta is accused of failing to protect young users from sexual exploitation, following an undercover online investigation. Attorney General Raúl Torrez in late 2023 sued Meta and Zuckerberg, who was later dropped from the suit.

A federal bellwether trial beginning in June in Oakland, California, will be the first to represent school districts that have sued social media platforms over harms to children.

In addition, more than 40 state attorneys general have filed lawsuits against Meta, claiming it is harming young people and contributing to the youth mental health crisis by deliberately designing features on Instagram and Facebook that addict children to its platforms. Most of the attorneys general filed in federal court, but some sued in their own state courts.

TikTok also faces similar lawsuits in more than a dozen states.


AI No Better Than Other Methods for Patients Seeking Medical Advice, Study Shows

AI (Artificial Intelligence) letters and a robot hand are placed on a computer motherboard in this illustration created on June 23, 2023. (Reuters)

Asking AI about medical symptoms does not help patients make better decisions about their health than other methods, such as a standard internet search, according to a new study published in Nature Medicine.

The authors said the study was important because people are increasingly turning to AI chatbots for advice on their health without evidence that this is necessarily the best and safest approach.

Researchers led by the Oxford Internet Institute at the University of Oxford worked alongside a group of doctors to draw up 10 different medical scenarios, ranging from a common cold to a life-threatening hemorrhage causing bleeding on the brain.

When tested without human participants, three large language models – OpenAI's GPT-4o, Meta's Llama 3 and Cohere's Command R+ – identified the conditions in 94.9% of cases and chose the correct course of action, such as calling an ambulance or going to the doctor, in an average of 56.3% of cases. The companies did not respond to requests for comment.

'HUGE GAP' BETWEEN AI'S POTENTIAL AND ACTUAL PERFORMANCE

The researchers then recruited 1,298 participants in Britain to investigate the symptoms and decide their next step using either AI or their usual resources, such as an internet search, their own experience or the National Health Service website.

When the participants did this, relevant conditions were identified in less than 34.5% of cases, and the right course of action was given in less than 44.2% of cases, no better than the control group using more traditional tools.

Adam Mahdi, co-author of the paper and associate professor at Oxford, said the study showed the “huge gap” between the potential of AI and the pitfalls when it was used by people.

“The knowledge may be in those bots; however, this knowledge doesn’t always translate when interacting with humans,” he said, adding that more work was needed to identify why this was happening.

HUMANS OFTEN GIVING INCOMPLETE INFORMATION

The team studied around 30 of the interactions in detail and concluded that the humans often provided incomplete or wrong information, while the LLMs also sometimes generated misleading or incorrect responses.

For example, one patient reporting the symptoms of a subarachnoid hemorrhage – a life-threatening condition causing bleeding on the brain – was correctly told by AI to go to hospital after describing a stiff neck, light sensitivity and the "worst headache ever". Another participant described the same symptoms but called the headache "terrible", and was told to lie down in a darkened room.

The team now plans a similar study in different countries and languages, and over time, to test if that impacts AI’s performance.

The study was supported by the data company Prolific, the German non-profit Dieter Schwarz Stiftung, and the UK and US governments.