Old Twitter Vs X: Israel-Gaza War Spotlights 'Information Crisis'

Users of X, formerly Twitter, complain they can no longer distinguish truth from fiction on the site. MOHAMMED ABED / AFP/File

Twitter won fame in the Arab uprisings more than a decade ago as a pivotal source for real-time crisis information, but that reputation has withered after the platform's transformation into a magnet for hate speech and disinformation under Elon Musk.

Historically, Twitter's greatest strength was as a tool for gathering and disseminating life-saving information and coordinating emergency relief during times of crisis. Its old-school verification system meant sources and news were widely trusted, said AFP.

Now the platform, renamed X by new owner Musk, has gutted content moderation, restored accounts of previously banned extremists, and allowed users simply to purchase account verification, helping them profit from viral -- but often inaccurate -- posts.

The fast-evolving Israel-Gaza conflict has been widely seen as the first real test of Musk's version of the platform during a major crisis. For many experts, the results confirm their worst fears: that changes have made it a challenge to discern truth from fiction.

"It is sobering, though not surprising, to see Musk's reckless decisions exacerbate the information crisis on Twitter surrounding the already tragic Israel-Hamas conflict," Nora Benavidez, senior counsel at the watchdog Free Press, told AFP.

The platform is flooded with violent videos and images -- some real but many fake and mislabeled from entirely different years and places.

Nearly three-fourths of the most viral posts promoting falsehoods about the conflict are being pushed by accounts with verified checkmarks, according to a new study by the watchdog NewsGuard.

In the absence of guardrails, that has made it "very difficult for the public to separate fact from fiction," while escalating "tension and division," Benavidez added.

'Fire hose of information'
That was evident on Tuesday after a deadly strike on a hospital in war-ravaged Gaza, as ordinary users scrambling for real-time information vented frustration that the site had become unusable.

Confusion reigned as fake accounts with verified checkmarks shared images of past conflicts while peddling hasty conclusions of unverified videos, illustrating how the platform had handed the megaphone to paying subscribers, irrespective of accuracy.

Accounts masquerading as official sources or news media stoked passions with inflammatory content.

Misinformation researchers warned that many users were treating an account of an activist group called "Israel war room," stamped with a gold checkmark -- indicating "an official organization account," according to X -- as a supposedly official Israeli source.

India-based bot accounts known for anti-Muslim rhetoric further muddied the waters by pushing false anti-Palestinian narratives, researchers said.

Meanwhile, Al Jazeera warned that it had "no ties" to an account that falsely claimed affiliation with the Middle East broadcaster, urging its followers to "exercise caution."

"It has become incredibly challenging to navigate the fire hose of information -- there is a relentless news cycle, push for clicks, and amplification of noise," Michelle Ciulla Lipkin, head of the National Association for Media Literacy Education, told AFP.

"Now it's clear Musk sees X not as a reliable information source but just another of his business ventures."

The chaos stands in sharp contrast to the 2011 Arab uprisings that prompted a surge of optimism in the Middle East about the potential of the platform to spread authentic information, mobilize communities and elevate democratic ideals.

'Break the glass'
The breakdown of the site's basic functionality threatens to impede or disrupt the humanitarian response, experts warn.

Humanitarian organizations have typically relied on such platforms to assess needs, prepare logistical plans and determine whether an area is safe for dispatching first responders. And human rights researchers use social media data to conduct investigations into possible war crimes, said Alessandro Accorsi, a senior analyst at the Crisis Group.

"The flood of misinformation and the limitations that X put in place for access to their API," which allows third-party developers to gather the social platform's data, had complicated those efforts, Accorsi told AFP.

X did not respond to AFP's request for comment.

The company's chief executive Linda Yaccarino has signaled that the platform was still serious about trust and safety, insisting that users were free to adjust their account settings to enable real-time sharing of information.

But researchers voiced pessimism, saying the site has abandoned efforts to elevate top news sources. Instead, a new ad revenue sharing program with content creators incentivizes extreme content designed to boost engagement, critics say.

Pat de Brun, head of Big Tech Accountability at Amnesty International, said X should use every tool available, including deploying so-called "break the glass" measures aimed at dampening the spread of falsehoods and hate speech.

"Platforms have clear responsibilities under international human rights standards," he told AFP.

"These responsibilities are heightened in times of crisis and conflict."



Meta Abruptly Ends US Fact-checks Ahead of Trump Term

Attendees visit the Meta booth at the Game Developers Conference in San Francisco on March 22, 2023. (AP)

Social media giant Meta on Tuesday slashed its content moderation policies, including ending its US fact-checking program on Facebook and Instagram, in a major shift that conforms with the priorities of incoming president Donald Trump.

"We're going to get rid of fact-checkers (that) have just been too politically biased and have destroyed more trust than they've created, especially in the US," Meta founder and CEO Mark Zuckerberg said in a post.

Instead, Meta platforms including Facebook and Instagram would use "community notes similar to X (formerly Twitter), starting in the US," he added.

Meta's surprise announcement echoed long-standing complaints made by Trump's Republican Party and X owner Elon Musk about fact-checking that many conservatives see as censorship.

They argue that fact-checking programs disproportionately target right-wing voices, which has led to proposed laws in states like Florida and Texas to limit content moderation.

"This is cool," Musk posted on his X platform after the announcement.

Zuckerberg, in a nod to Trump's victory, said that "recent elections feel like a cultural tipping point towards, once again, prioritizing speech" over moderation.

The shift came as the 40-year-old tycoon has been making efforts to reconcile with Trump since his election in November, including donating one million dollars to his inauguration fund.

Trump has been a harsh critic of Meta and Zuckerberg for years, accusing the company of bias against him.

The Republican was kicked off Facebook following the January 6, 2021, attack on the US Capitol by his supporters, though the company restored his account in early 2023.

Zuckerberg, like several other tech leaders, has met with Trump at his Mar-a-Lago resort in Florida ahead of his January 20 inauguration.

Meta in recent days has taken other gestures likely to please Trump's team, such as appointing former Republican official Joel Kaplan to head up public affairs at the company.

He takes over from Nick Clegg, a former British deputy prime minister.

Zuckerberg also named Ultimate Fighting Championship (UFC) head Dana White, a close ally of Trump, to the Meta board.

Kaplan, in a statement Tuesday, insisted the company's approach to content moderation had "gone too far."

"Too much harmless content gets censored, too many people find themselves wrongly locked up in 'Facebook jail,'" he said.

As part of the overhaul, Meta said it will relocate its trust and safety teams from liberal California to more conservative Texas.

"That will help us build trust to do this work in places where there is less concern about the bias of our teams," Zuckerberg said.

Zuckerberg also took a shot at the European Union, which he said "has an ever increasing number of laws institutionalizing censorship and making it difficult to build anything innovative there."

The remark referred to new laws in Europe that require Meta and other major platforms to maintain content moderation standards or risk hefty fines.

Zuckerberg said that Meta would "work with President Trump to push back against foreign governments going after American companies to censor more."

Additionally, Meta announced it would reverse its 2021 policy of reducing political content across its platforms.

Instead, the company will adopt a more personalized approach, allowing users greater control over the amount of political content they see on Facebook, Instagram, and Threads.

AFP currently works in 26 languages with Facebook's fact-checking program, in which Facebook pays to use fact-checks from around 80 organizations globally across its platform, WhatsApp and Instagram.

In that program, content rated "false" is downgraded in news feeds so fewer people will see it and if someone tries to share that post, they are presented with an article explaining why it is misleading.

Community Notes on X (formerly Twitter) allows users to collaboratively add context to posts in a system that aims to distill reliable information through consensus rather than top-down moderation.

Meta's move into fact-checking came in the wake of Trump's shock election in 2016, which critics said was enabled by rampant disinformation on Facebook and interference by foreign actors like Russia on the platform.