OpenAI Assigns New Project to AI Safety Leader Madry in Revamp

The OpenAI logo is displayed on a cell phone with an image on a computer monitor generated by ChatGPT's Dall-E text-to-image model, Dec. 8, 2023, in Boston. (AP)

OpenAI Chief Executive Sam Altman said on Tuesday the ChatGPT maker's AI safety leader Aleksander Madry was working on a new research project, as the startup rejigs the preparedness team.

"Aleksander is working on a new and v(very) important research project," Altman said in a post on X, adding that OpenAI executives Joaquin Quinonero Candela and Lilian Weng will be taking over the preparedness team in the meanwhile.

The preparedness team helps evaluate the artificial general intelligence readiness of the company's AI models, a spokesperson for OpenAI said in a statement, adding that Madry will take on a bigger role within the research organization following the move.

Madry did not immediately respond to requests for comment, according to Reuters.

"Joaquin and Lilian are taking over the preparedness team as part of unifying our safety work", Altman wrote in the post.

The Information, which was first to report Madry's move, said researcher Tejal Patwardhan would manage much of the team's work.

The moves come as OpenAI's chatbots, which can engage in human-like conversations and create videos and images from text prompts, have become increasingly powerful and have stirred safety concerns.

Earlier this year, the Microsoft-backed company formed a Safety and Security Committee led by board members, including Altman, ahead of training its next artificial intelligence model.



Meta Abruptly Ends US Fact-checks Ahead of Trump Term

Attendees visit the Meta booth at the Game Developers Conference in San Francisco on March 22, 2023. (AP)

Social media giant Meta on Tuesday scaled back its content moderation policies, including ending its US fact-checking program on Facebook and Instagram, in a major shift that conforms with the priorities of incoming president Donald Trump.

"We're going to get rid of fact-checkers (that) have just been too politically biased and have destroyed more trust than they've created, especially in the US," Meta founder and CEO Mark Zuckerberg said in a post.

Instead, Meta platforms including Facebook and Instagram will use "community notes similar to X (formerly Twitter), starting in the US," he added.

Meta's surprise announcement echoed long-standing complaints made by Trump's Republican Party and X owner Elon Musk about fact-checking that many conservatives see as censorship.

They argue that fact-checking programs disproportionately target right-wing voices, which has led to proposed laws in states like Florida and Texas to limit content moderation.

"This is cool," Musk posted on his X platform after the announcement.

Zuckerberg, in a nod to Trump's victory, said that "recent elections feel like a cultural tipping point towards, once again, prioritizing speech" over moderation.

The shift came as the 40-year-old tycoon has been making efforts to reconcile with Trump since his election in November, including donating one million dollars to his inauguration fund.

Trump has been a harsh critic of Meta and Zuckerberg for years, accusing the company of bias against him.

The Republican was kicked off Facebook following the January 6, 2021, attack on the US Capitol by his supporters, though the company restored his account in early 2023.

Zuckerberg, like several other tech leaders, has met with Trump at his Mar-a-Lago resort in Florida ahead of his January 20 inauguration.

Meta in recent days has made other gestures likely to please Trump's team, such as appointing former Republican official Joel Kaplan to head up public affairs at the company.

He takes over from Nick Clegg, a former British deputy prime minister.

Zuckerberg also named Ultimate Fighting Championship (UFC) head Dana White, a close ally of Trump, to the Meta board.

Kaplan, in a statement Tuesday, insisted the company's approach to content moderation had "gone too far."

"Too much harmless content gets censored, too many people find themselves wrongly locked up in 'Facebook jail,'" he said.

As part of the overhaul, Meta said it will relocate its trust and safety teams from liberal California to more conservative Texas.

"That will help us build trust to do this work in places where there is less concern about the bias of our teams," Zuckerberg said.

Zuckerberg also took a shot at the European Union, which he said has "an ever increasing number of laws institutionalizing censorship and making it difficult to build anything innovative there."

The remark referred to new laws in Europe that require Meta and other major platforms to maintain content moderation standards or risk hefty fines.

Zuckerberg said that Meta would "work with President Trump to push back against foreign governments going after American companies to censor more."

Additionally, Meta announced it would reverse its 2021 policy of reducing political content across its platforms.

Instead, the company will adopt a more personalized approach, allowing users greater control over the amount of political content they see on Facebook, Instagram, and Threads.

AFP currently works in 26 languages with Facebook's fact-checking program, in which Facebook pays to use fact-checks from around 80 organizations globally on its platform, as well as on WhatsApp and Instagram.

In that program, content rated "false" is downgraded in news feeds so fewer people see it, and anyone who tries to share such a post is shown an article explaining why it is misleading.

Community Notes on X (formerly Twitter) allows users to collaboratively add context to posts in a system that aims to distill reliable information through consensus rather than top-down moderation.

Meta's move into fact-checking came in the wake of Trump's shock election in 2016, which critics said was enabled by rampant disinformation on Facebook and interference by foreign actors like Russia on the platform.