Tent Demos Turn West Bank Eviction into Rallying Cry

Activists confront a settler (left) near the occupied West Bank village of Beit Jala. (AFP)
Flanked by smartphone-wielding peace activists, members of an evicted Palestinian family marched onto land seized by armed Israeli settlers, shouting "Out! Out!" as they livestreamed the confrontation on Instagram.

After Israeli security forces turned them away, they retreated to their makeshift base: a fast-growing tent encampment for supporters of the family -- the Kisiyas -- that has spotlighted their plight amid widening settler attacks in the Israeli-occupied West Bank.

Violence in the West Bank has surged alongside the war in Gaza, with at least 640 Palestinians killed by Israeli troops and settlers since Hamas's October 7 attack, according to an AFP tally based on Palestinian health ministry figures.

At least 19 Israelis have also died in Palestinian attacks during the same period, according to Israeli officials.

Yet weeks of demonstrations at the tent near the Kisiyas' home in Beit Jala, south of Jerusalem, have made their story stand out, attracting anti-settlement activists, lawmakers, rabbis and Palestinians from other communities facing similar incursions.

The daily gatherings feature meals, prayer, singalongs and lessons on non-violent resistance, usually followed by a caravan to the site to demand that the settlers leave.

During one such encounter on Thursday, Kisiya family members grabbed whatever they could -- mattresses, electrical cables, fruit from a pomegranate tree -- while activists tried to tear down settler-erected fences.

On Friday, 70 Israeli Jews held Shabbat services at the encampment and spent the night there.

It is the kind of show of solidarity that was once more common but has become vanishingly rare during the war, organizers said.

"We will stay here until we get back our land," 30-year-old Alice Kisiya told AFP.

The settlers "took advantage of the war. They thought it would end in silence, but it didn't."

- 'Example to show the world' -

Some details of the Kisiyas' story have helped turn it into a rallying cry.

They are one of the area's few Christian families, and the land's stepped agricultural terraces sit in one of its few accessible green spaces.

Yet Knesset member Aida Touma-Suleiman told AFP that while the mobilization around their struggle might be unusual, the challenges the Kisiyas face are common.

"I wish we can be able to stand near each family like this, but maybe this can be an example to show the world what is happening," she said.

Earlier this month, Israel's far-right Finance Minister Bezalel Smotrich announced the approval of a new settlement in the same area as the Kisiya encampment, a project the United Nations says would encroach on the UNESCO World Heritage site of Battir.

The news drew international outcry, with Washington and the United Nations saying the settlement known as Nahal Heletz would jeopardize the viability of a Palestinian state.

All of Israel's settlements in the West Bank, occupied since 1967, are considered illegal under international law, regardless of whether they have Israeli planning permission.

The Kisiyas have for years been threatened by settlement activity, and in 2019 the Israeli civil administration demolished the family's home and restaurant.

The latest run-in occurred on July 31, when settlers from a nearby outpost accompanied by soldiers "raided the land, assaulting members of the Kisiya family and activists trying to force them to leave the area", according to Israeli anti-settlement group Peace Now.

- 'Is it dangerous?' -

The Kisiyas joined with activists to form the encampment just over a week later, although it got off to a slow start.

"I wish there was a camera when we first started. We were just sitting with chairs, had nothing in here. And we were discussing, like, 'What are we doing?'" said Palestinian activist Mai Shahin of Combatants for Peace.

"The first week was really hard," she said, with people, initially hesitant to join the encampment, calling to ask her: "Is it dangerous?"

As it has grown in size, Palestinians from elsewhere have come to see the encampment as a safe space.

"I have a lot of trauma from wearing my own keffiyeh (scarf) and wearing my identity for everyone to see," said Amira Mohammed, 25, of Jerusalem.

In the encampment "we were able to actually be ourselves, wear our keffiyehs, sing our songs in our language with our Israeli counterparts".

But some activists point out that despite the energy in the encampment, the current Israeli government appears set on expanding settlement activity.

"No anti-Israeli and anti-Zionist decision will stop the development of settlements," Smotrich, who himself lives in a settlement, posted on X this month.

"We will continue to fight against the dangerous project of creating a Palestinian state by creating facts on the ground."

Activist Talya Hirsch said such statements leave her with "no hope for this land" and "no vision of a better future".

"But I don't move from this place. I have no hope but I have a high sense of responsibility."



ISIS Supporters Turn to AI to Bolster Online Support

FILE PHOTO: AI (Artificial Intelligence) letters and robot hand miniature in this illustration, taken June 23, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

Days after a deadly ISIS attack on a Russian concert hall in March, a man clad in military fatigues and a helmet appeared in an online video, celebrating the assault in which more than 140 people were killed.
"ISIS delivered a strong blow to Russia with a bloody attack, the fiercest that hit it in years," the man said in Arabic, according to the SITE Intelligence Group, an organization that tracks and analyzes such online content.
But the man in the video, which the Thomson Reuters Foundation was not able to view independently, was not real - he was created using artificial intelligence, according to SITE and other online researchers.
Federico Borgonovo, a researcher at the Royal United Services Institute, a London-based think tank, traced the AI-generated video to an ISIS supporter active in the group's digital ecosystem.
This person had combined statements, bulletins, and data from ISIS's official news outlet to create the video using AI, Borgonovo explained.
Although ISIS has been using AI for some time, Borgonovo said the video was an "exception to the rules" because the production quality was high even if the content was not as violent as in other online posts.
"It's quite good for an AI product. But in terms of violence and the propaganda itself, it's average," he said, noting however that the video showed how ISIS supporters and affiliates can ramp up production of sympathetic content online.
Digital experts say groups like ISIS and far-right movements are increasingly using AI online and testing the limits of safety controls on social media platforms.
A January study by the Combating Terrorism Center at West Point said AI could be used to generate and distribute propaganda, to recruit using AI-powered chatbots, to carry out attacks using drones or other autonomous vehicles, and to launch cyber-attacks.
"Many assessments of AI risk, and even of generative AI risks specifically, only consider this particular problem in a cursory way," said Stephane Baele, professor of international relations at UCLouvain in Belgium.
"Major AI firms, who genuinely engaged with the risks of their tools by publishing sometimes lengthy reports mapping them, pay scant attention to extremist and terrorist uses."
Regulation governing AI is still being crafted around the world and pioneers of the technology have said they will strive to ensure it is safe and secure.
Tech giant Microsoft, for example, has developed a Responsible AI Standard that aims to base AI development on six principles including fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.
In a special report earlier this year, SITE Intelligence Group's founder and executive director Rita Katz wrote that a range of actors from members of militant group al Qaeda to neo-Nazi networks were capitalizing on the technology.
"It's hard to understate what a gift AI is for terrorists and extremist communities, for which media is lifeblood," she wrote.
CHATBOTS AND CARTOONS
At the height of its powers in 2014, ISIS claimed control over large parts of Syria and Iraq, imposing a reign of terror in the areas it controlled.
Media was a prominent tool in the group's arsenal, and online recruitment has long been vital to its operations.
Despite the collapse of its self-declared “caliphate” in 2017, its supporters and affiliates still preach their doctrine online and try to persuade people to join their ranks.
Last month, a security source told Reuters that France had identified a dozen ISIS-K handlers, based in countries around Afghanistan, who maintain a strong online presence and are trying to convince young men in European countries who are interested in joining the group overseas to instead carry out domestic attacks.
ISIS-K is a resurgent wing of ISIS, named after the historical region of Khorasan that included parts of Iran, Afghanistan and Central Asia.
Analysts fear that AI may facilitate and automate the work of such online recruiters.
Daniel Siegel, an investigator at social media research firm Graphika, said his team came across chatbots that mimicked dead or incarcerated ISIS militants.
He told the Thomson Reuters Foundation that it was unclear if the source of the bots was ISIS or its supporters, but the risk they posed was still real.
"Now (ISIS affiliates) can build these real relationships with bots that represent a potential future where a chatbot could encourage them to engage in kinetic violence," Siegel said.
Siegel interacted with some of the bots as part of his research and he found their answers to be generic, but he said that could change as AI tech develops.
"One of the things I am worried about as well is how synthetic media will enable these groups to blend their content that previously existed in silos into our mainstream culture," he added.
That is already happening: Graphika tracked videos of popular cartoon characters, like Rick and Morty and Peter Griffin, singing ISIS anthems on different platforms.
"What this allows the group or sympathizers or affiliates to do is target specific audiences because they know that the regular consumers of Sponge Bob or Peter Griffin or Rick and Morty, will be fed that content through the algorithm," Siegel said.
EXPLOITING PROMPTS
Then there is the danger of ISIS supporters using AI tech to broaden their knowledge of illegal activities.
For its January study, researchers at the Combating Terrorism Center at West Point attempted to bypass the safety guardrails of large language models (LLMs) and extract information that could be exploited by malicious actors.
They crafted prompts that requested information on a range of activities from attack planning to recruitment and tactical learning, and the LLMs generated responses that were relevant half of the time.
In one example that they described as "alarming", researchers asked an LLM to help them convince people to donate to ISIS.
"There, the model yielded very specific guidelines on how to conduct a fundraising campaign and even offered specific narratives and phrases to be used on social media," the report said.
Joe Burton, a professor of international security at Lancaster University, said companies were acting irresponsibly by rapidly releasing AI models as open-source tools.
He questioned the efficacy of LLMs' safety protocols, adding that he was "not convinced" that regulators were equipped to enforce the testing and verification of these methods.
"The factor to consider here is how much we want to regulate, and whether that will stifle innovation," Burton said.
"The markets, in my view, shouldn't override safety and security, and I think - at the moment - that is what is happening."