Mark Zuckerberg Set to Testify in Watershed Social Media Trial

Meta's CEO Mark Zuckerberg testifies during the Senate Judiciary Committee hearing on online child sexual exploitation at the US Capitol in Washington, US, January 31, 2024. (Reuters)

Mark Zuckerberg will testify in an unprecedented social media trial that questions whether Meta's platforms deliberately addict and harm children.

Meta's CEO is expected to answer tough questions on Wednesday from attorneys representing a now 20-year-old woman identified by the initials KGM, who claims her early use of social media addicted her to the technology and exacerbated depression and suicidal thoughts. Meta Platforms and Google’s YouTube are the two remaining defendants in the case; TikTok and Snap have already settled.

Zuckerberg has testified in other trials and answered questions from Congress about youth safety on Meta's platforms; at one such hearing, he apologized to families whose lives had been upended by tragedies they believed were caused by social media.

This trial, though, marks the first time Zuckerberg will answer similar questions in front of a jury, and, once again, bereaved parents are expected to occupy the limited courtroom seats available to the public.

The case, along with two others, has been selected as a bellwether trial, meaning its outcome could impact how thousands of similar lawsuits against social media companies would play out.

A Meta spokesperson said the company strongly disagrees with the allegations in the lawsuit and is “confident the evidence will show our longstanding commitment to supporting young people.”

One of Meta's attorneys, Paul Schmidt, said in his opening statement that the company is not disputing that KGM experienced mental health struggles; what it disputes is that Instagram was a substantial factor in those struggles.

He pointed to medical records that showed a turbulent home life, and both he and an attorney representing YouTube argued that she turned to their platforms as a coping mechanism or a means of escaping her mental health struggles.

Zuckerberg's testimony comes a week after that of Adam Mosseri, the head of Meta's Instagram, who said in the courtroom that he disagrees with the idea that people can be clinically addicted to social media platforms.

Mosseri maintained that Instagram works hard to protect young people using the service, and said it's “not good for the company, over the long run, to make decisions that profit for us but are poor for people’s well-being."

Much of Mosseri's questioning from the plaintiff's lawyer, Mark Lanier, centered on cosmetic filters on Instagram that changed people’s appearance — a topic that Lanier is sure to revisit with Zuckerberg.

Zuckerberg is also expected to face questions about Instagram’s algorithm, the infinite scroll of Meta’s feeds and other features the plaintiffs argue are designed to get users hooked.



ByteDance Reportedly Suspends Launch of Video AI Model after Copyright Disputes

FILE PHOTO: The ByteDance logo is seen at the company's office building in Shanghai, China July 4, 2023. REUTERS/Aly Song/File Photo

TikTok's Chinese parent, ByteDance, has put on hold the global launch of its latest video-generation model, Seedance 2.0, after a series of copyright disputes with major Hollywood studios and streaming platforms, The Information reported on Saturday, citing two people with direct knowledge of the situation.

Reuters could not immediately verify the report, and ByteDance did not immediately respond to a request for comment. The company said last month it would take steps to prevent the unauthorized use of intellectual property on its AI video generator Seedance 2.0, following threats of legal action from US studios, including Disney.

Disney sent a cease-and-desist letter to the Chinese firm last month, accusing it of using Disney characters to train and power Seedance 2.0 without permission, after videos generated by the model went viral in China, including one of Tom Cruise and Brad Pitt in a fight.

Disney said ByteDance had pre-packaged Seedance with a pirated library of copyrighted characters from franchises including Star Wars and Marvel, portraying them as public-domain clip art. ByteDance, which officially unveiled the model in February, has said the system is aimed at professional film, e-commerce and advertising use, highlighting its ability to process text, images, audio and video at once to reduce content production costs.

Seedance 2.0 has drawn attention after earning comparisons with DeepSeek, a Chinese AI company that has built models rivaling those of Anthropic and OpenAI. Tech executives, including Elon Musk, have praised its ability to generate cinematic storylines from a handful of prompts.

ByteDance had been aiming to make the new video model available to customers worldwide in mid-March, but has since suspended those plans, The Information reported.

ByteDance's legal team is working to identify and resolve potential legal issues, and engineers are adding safeguards to prevent the model from generating content that could lead to further intellectual property violations, the report added.


Fake Images, Videos in Wartime: How to Tell Fact from Deepfakes

Misinformation spreads rapidly on social media during crises and conflicts (Shutterstock)

As tensions escalate across several fronts in the Middle East, information is spreading almost as quickly as the events themselves.

Social media platforms are often the first place where images, videos, and reports of alleged attacks or military developments appear.

But alongside legitimate information, a wave of misleading or fabricated content is also circulating online, making it increasingly difficult to separate fact from fiction.

A Growing Digital Challenge

Cybersecurity experts warn that the rapid spread of misinformation, particularly through manipulated videos and deepfake technologies, has become a growing digital threat during periods of geopolitical instability.

Maher Yamout, Lead Security Researcher at Kaspersky, told Asharq Al-Awsat that distinguishing reliable information from false narratives becomes especially critical during emergencies, when emotions run high, and people tend to share content quickly without verifying it.

“With developments unfolding in the Middle East, government authorities in Gulf Cooperation Council countries have warned against publishing or circulating information from unknown sources,” he said.

“Fake news, misleading or inaccurate information presented as real news, becomes more dangerous during emergencies.”

Misinformation Spreads Fast

Fake news is not new, but its scale and speed have changed dramatically with the rise of social media and artificial intelligence tools. During periods of geopolitical tension, unverified reports or manipulated videos can spread within minutes, reaching millions before fact-checkers can respond.

Experts generally divide fake news into two main categories. The first involves fully fabricated content designed to influence public opinion or attract traffic to specific websites. The second contains elements of truth but presents them inaccurately because the author failed to verify all the facts or exaggerated certain details.

Both can confuse audiences during crises, particularly when users rely on social media rather than trusted news outlets for updates.

Authorities in several countries have also warned that sharing inaccurate information, even unintentionally, may expose users to legal accountability.

Governments and digital security experts are therefore urging greater digital awareness and responsibility when sharing information during sensitive periods.

AI-Powered Deception

Artificial intelligence has added a new layer to the misinformation problem through so-called deepfake technologies, fabricated videos created using machine learning techniques such as face swapping or synthetic visual generation.

In some cases, authentic footage can be altered to appear as if it documents events that never occurred.

Yamout said verifying information has become more important than ever with the spread of deepfakes.

“Artificial intelligence makes it possible to combine different video clips to produce new scenes showing events or actions that never happened in reality, often with highly realistic results,” he said.

Such technology can make manipulated videos appear convincing and potentially mislead users, especially when they circulate in emotionally charged contexts. Edited clips may appear to show attacks, military movements, or political statements that never took place.

Even when these videos are later debunked, their initial spread can still trigger confusion or public anxiety.

How to Verify Information

Cybersecurity experts say users themselves play a key role in limiting the spread of misinformation. While platforms and regulators are developing tools to detect fake content, individuals can take simple steps to verify information before sharing it.

The first step is checking the source. Websites that publish false information may contain spelling errors in their web addresses or use unusual domains that mimic well-known media outlets.

Yamout advises carefully reviewing the website address and checking the “About Us” section on unfamiliar sites. It is generally safer to rely on official sources such as government websites or trusted media organizations.
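
To illustrate that first check, the short Python sketch below compares a web address against a handful of well-known outlets and flags near-misses, the kind of one-character-off domains that spoofed sites often use. It is a minimal illustration, not a vetted tool: the outlet list and similarity threshold are assumptions chosen for the example.

from difflib import SequenceMatcher

# Illustrative allowlist; a real checker would use a much larger,
# actively maintained list of trusted outlets.
TRUSTED_DOMAINS = ["reuters.com", "apnews.com", "bbc.com"]

def looks_like_impostor(domain: str, threshold: float = 0.85) -> bool:
    """Return True if a domain closely resembles, but does not exactly
    match, a trusted outlet (a common sign of a spoofed site)."""
    domain = domain.lower().strip()
    for trusted in TRUSTED_DOMAINS:
        if domain == trusted:
            return False  # exact match: the genuine site
        if SequenceMatcher(None, domain, trusted).ratio() >= threshold:
            return True   # suspiciously similar to a known outlet
    return False

print(looks_like_impostor("reuters.com"))  # False: the real domain
print(looks_like_impostor("reufers.com"))  # True: one character off

A check like this only catches look-alike spellings; it does not replace confirming who runs the site or comparing its reporting against other outlets.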

Users should also verify the identity of the author or the organization behind the report. If the author is unknown or lacks clear expertise in the subject, the information should be treated cautiously.

Comparing reports with other credible sources is also important. Professional news organizations follow editorial guidelines and verification procedures, meaning major events are typically reported by multiple reputable outlets.

Yamout also highlighted the importance of checking dates and timelines, noting that some misleading content recirculates old events and presents them as recent developments.

He added that social media algorithms can create so-called “echo chambers,” where users are shown content that aligns with their existing views and interests. This makes it essential to consult diverse and reliable sources before forming conclusions.

Playing on Emotions

Many fake news stories are designed to provoke strong emotional reactions. Sensational headlines or dramatic clips are often crafted to trigger fear, anger, or shock, emotions that increase the likelihood that users will quickly share the content.

“Many fake news stories are written in a clever way to provoke strong emotional reactions,” Yamout said.

Maintaining critical thinking and asking a simple question — why was this story written? — can help users avoid spreading misinformation, he added.

This dynamic is amplified on social media platforms, where algorithms tend to promote content that generates strong engagement. Emotionally charged posts can therefore spread faster than balanced reporting.

Spotting Signs of Manipulation

Images and videos themselves may provide clues that they have been altered. Edited photos may display distorted background lines, unnatural shadows, or unrealistic skin tones.

In manipulated videos, inconsistencies may appear in lighting, eye movement, or facial expressions. While these signs are not always easy to detect, particularly on smartphones, they can raise doubts about the authenticity of widely shared clips.
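
For still photos, one widely used first-pass screening technique is error level analysis, which re-saves a JPEG and highlights regions that compress differently from the rest of the frame, often a sign of pasted-in content. The Python sketch below, using the Pillow imaging library, is a minimal illustration rather than a forensic tool; the quality setting is an assumption, and the file name in the usage line is hypothetical.

import io
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Re-save the image as JPEG and return the amplified difference.
    Regions edited after the original save often re-compress differently
    and show up brighter in the result."""
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)  # controlled re-save
    buffer.seek(0)
    resaved = Image.open(buffer)
    diff = ImageChops.difference(original, resaved)
    # Stretch the faint differences so they are visible to the eye.
    max_diff = max(hi for _, hi in diff.getextrema()) or 1
    return diff.point(lambda px: px * (255.0 / max_diff))

# Hypothetical file name, for illustration only:
# error_level_analysis("suspect_photo.jpg").show()

Bright, blocky regions in the output are only a prompt for closer inspection; rescaling, screenshots, and ordinary compression artifacts can all produce false positives, so human judgment remains essential.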

A Shared Digital Responsibility

Experts say limiting the spread of misinformation during crises requires cooperation among governments, technology companies, media organizations, and users.

Yamout said the simplest rule may also be the most effective: “If you are not sure the content is accurate, do not share it.”

Responsible sharing can help curb the spread of misinformation and protect digital communities.

As digital platforms continue to shape how information travels across borders, the ability to critically evaluate online content is becoming an essential skill.

During periods of geopolitical tension and conflict, when rumors and facts can blur, the challenge is not only cybersecurity but also protecting the credibility of information itself.


Adobe Shares Drop after CEO Exit Adds to AI-disruption Concerns

FILE PHOTO: Signage for Adobe is displayed at National Retail Federation (NRF) 2026: Retail's Big Show, in New York City, US, January 12, 2026. REUTERS/Kylie Cooper/File Photo

Adobe's shares plunged 9% in premarket trading on Friday after the Photoshop maker said CEO Shantanu Narayen would step down after 18 years at the helm, unsettling investors already wary of AI-driven disruptions to the design software market.

The longtime CEO's exit comes at a critical juncture as Adobe works to reassure investors it can keep pace with sweeping changes brought by artificial intelligence in the software landscape.

The exit follows a broader slide in software stocks: fears that AI agents could supplant some traditional applications fueled a nearly $1 trillion rout in the sector globally last month.

"The loss of an iconic leader at a time of peak uncertainty around the future of software more broadly, and the positioning of Adobe ⁠specifically in this new GenAI world is bound to further investor uncertainty and anxiety around the shares," said analysts at Morgan Stanley.

Adobe's shares are down about 23% so far this year, extending a slide that has stretched over the past two years.

The company, which makes Illustrator, Premiere Pro and other tools for creative professionals, is among a group of SaaS providers, including Salesforce, that have struggled to win new clients amid a wave of AI start-ups.

On Thursday, Adobe reported double-digit growth in total revenue and customer subscription segments in the first quarter, reflecting resilient spending on its product suite.

"After steering the Adobe ship through rough seas over the past several years, several data points from the most recent quarter suggest the captain (Narayen) may have brought this franchise into a safe harbor, from which it can continue to thrive," Morgan Stanley analysts said.