Wikipedia Launches New Global Rules to Combat Site Abuses

The foundation that operates Wikipedia will launch its first global code of conduct. (Reuters file photo)

The foundation that operates Wikipedia will launch its first global code of conduct on Tuesday, seeking to address criticism that it has failed to combat harassment and suffers from a lack of diversity.

“We need to be much more inclusive,” said María Sefidari, the chair of the board of trustees for the non-profit Wikimedia Foundation. “We are missing a lot of voices, we’re missing women, we’re missing marginalized groups.”

Online platforms have come under intense scrutiny for abusive behavior, violent rhetoric and other forms of problematic content, pushing them to revamp content rules and more strictly enforce them.

Unlike Facebook Inc and Twitter Inc, which take more top-down approaches to content moderation, the online encyclopedia, which turned 20 years old last month, relies largely on unpaid volunteers to handle issues around users’ behavior.

Wikimedia said more than 1,500 Wikipedia volunteers from five continents, working in 30 languages, participated in the creation of the new rules after the board of trustees voted in May last year to develop new binding standards.

“There’s been a process of change throughout the communities,” Katherine Maher, the executive director of the Wikimedia Foundation, said in an interview with Reuters. “It took time to build the support that was necessary to do the consultations for people to understand why this is a priority.”

The new code of conduct bans harassment on and off the site, barring behaviors like hate speech, the use of slurs, stereotypes or attacks based on personal characteristics, as well as threats of physical violence and “hounding,” or following someone across different articles to critique their work.

It also bans deliberately introducing false or biased information into content. Wikipedia is a relatively trusted site compared to major social media platforms, which have struggled to curb misinformation.

Maher said some users’ concerns that the new rules meant the site was becoming more centralized were unfounded.

Wikipedia has 230,000 volunteer editors who work on crowdsourced articles and more than 3,500 “administrators” who can take actions like blocking accounts or restricting edits on certain pages. Sometimes, complaints are decided on by panels of users elected by the communities.

Wikimedia said the next phase of the project would be working on the rules’ enforcement.

“A code of conduct without enforcement…is not going to be useful,” said Sefidari. “We’re going to figure this out with the communities,” she said.

Maher said there would be training for communities and interested task-forces of users.

Wikimedia has no immediate plans to beef up its small “trust and safety” team, a group of about a dozen staff that currently acts on urgent matters such as death threats or the sharing of people’s private information, she said.



Anthropic Says Looking to Power European Tech with Hiring Push

As the AI race heats up, so does the race to find talent in the sector, which is currently dominated by US and Chinese companies. Fabrice COFFRINI / AFP/File

American AI giant Anthropic aims to boost the European tech ecosystem as it expands on the continent, product chief Mike Krieger told AFP Thursday at the Vivatech trade fair in Paris.

The OpenAI competitor wants to be "the engine behind some of the largest startups of tomorrow... (and) many of them can and should come from Europe", Krieger said.

Tech industry and political leaders have often lamented Europe's failure to capitalize on its research and education strength to build heavyweight local companies -- with many young founders instead leaving to set up shop across the Atlantic.

Krieger's praise for the region's "really strong talent pipeline" chimed with an air of continental tech optimism at Vivatech.

French AI startup Mistral on Wednesday announced a multibillion-dollar tie-up to bring high-powered computing resources from chip behemoth Nvidia to the region.

The semiconductor firm will "increase the amount of AI computing capacity in Europe by a factor of 10" within two years, Nvidia boss Jensen Huang told an audience at the southern Paris convention center.

With some 100 hires planned across the continent, Anthropic is building up its technical and research strength in Europe, where it has offices in Dublin and the non-EU capital London, Krieger said.

Beyond the startups he hopes to boost, many long-standing European companies "have a really strong appetite for transforming themselves with AI", he added, citing luxury giant LVMH, which had a large footprint at Vivatech.

'Safe by design'

Mistral -- founded only in 2023 and far smaller than American industry leaders like OpenAI and Anthropic -- is nevertheless "definitely in the conversation" in the industry, Krieger said.

The French firm recently followed in the footsteps of the US companies by releasing a so-called "reasoning" model able to take on more complex tasks.

"I talk to customers all the time that are maybe using (Anthropic's AI) Claude for some of the long-horizon agentic tasks, but then they've also fine-tuned Mistral for one of their data processing tasks, and I think they can co-exist in that way," Krieger said.

So-called "agentic" AI models -- including the most recent versions of Claude -- work as autonomous or semi-autonomous agents that are able to do work over longer horizons with less human supervision, including by interacting with tools like web browsers and email.

Capabilities displayed by the latest releases have raised fears among some researchers, such as University of Montreal professor and "AI godfather" Yoshua Bengio, that independently acting AI could soon pose a risk to humanity.

Bengio last week launched a non-profit, LawZero, to develop "safe-by-design" AI -- originally a key founding promise of OpenAI and Anthropic.

'Very specific genius'

"A huge part of why I joined Anthropic was because of how seriously they were taking that question" of AI safety, said Krieger, a Brazilian software engineer who co-founded Instagram, which he left in 2018.

Anthropic is still working on measures designed to restrict its AI models’ potential to do harm, he added.

But it has yet to release details of its "level 4" AI safety protections foreseen for still more powerful models, after activating ASL (AI Safety Level) 3 to corral the capabilities of May's Claude Opus 4 release.

Developing ASL 4 is "an active part of the work of the company", Krieger said, without giving a potential release date.

With Claude Opus 4, “we’ve deployed the mitigations kind of proactively... safe doesn’t have to mean slow, but it does mean having to be thoughtful and proactive ahead of time” to make sure safety protections don’t impair performance, he added.

Looking to upcoming releases from Anthropic, Krieger said the company's models were on track to match chief executive Dario Amodei's prediction that Anthropic would offer customers access to a "country of geniuses in a data center" by 2026 or 2027 -- within limits.

Anthropic's latest AI models are "genius-level at some very specific things", he said.

"In the coming year... it will continue to spike in particular aspects of things, and still need a lot of human-in-the-loop coordination," he forecast.