Google Faces Internal Battle Over Research on AI to Speed Chip Design

The logo of Google is pictured during the Viva Tech start-up and technology summit in Paris, France, May 25, 2018. REUTERS/Charles Platiau

Alphabet Inc's Google said on Monday it had recently fired a senior engineering manager after colleagues accused him of harassing behavior; he had been trying to discredit their landmark research on artificial intelligence software.

The dispute, which stems from efforts to automate chip design, threatens to undermine the reputation of Google's research in the academic community. It also could disrupt the flow of millions of dollars in government grants for research into AI and chips.

Google's research unit has faced scrutiny since late 2020 after workers lodged open critiques about its handling of personnel complaints and publication practices.

The new episode emerged after the scientific journal Nature in June published "A graph placement methodology for fast chip design," led by Google scientists Azalia Mirhoseini and Anna Goldie. The paper reported that AI could complete a key step in the chip design process, known as floorplanning, faster and better than an unspecified human expert, a subjective reference point.

But other Google colleagues, in a paper anonymously posted online in March titled "Stronger Baselines for Evaluating Deep Reinforcement Learning in Chip Placement," found that two alternative approaches based on basic software outperformed the AI: one beat it on a well-known test, the other on a proprietary Google rubric.

Google declined to comment to Reuters on the leaked draft, but two workers confirmed its authenticity.

The company said it declined to publish "Stronger Baselines" because it did not meet its standards, and soon after it fired Satrajit Chatterjee, a leading driver of the work. It would not say why.

"It’s unfortunate that Google has taken this turn," said Laurie Burgess, an attorney for Chatterjee. "It was always his goal to have transparency about the science, and he urged over the course of two years for Google to address this."

Google researcher Goldie told the New York Times, which on Monday first reported the firing, that Chatterjee had harassed her and Mirhoseini for years by spreading misinformation about them.

Burgess denied the allegations, and added that Chatterjee did not leak Stronger Baselines.

Patrick Madden, an associate professor focused on chip design at Binghamton University who has read both papers, said he could not recall another paper like the one in Nature that lacked a good comparison point.

"It's like a reference problem: Everyone gets the same jigsaw puzzle pieces and you can compare how close you come to getting everything right," he said. "If they were to produce results on some standard benchmark and they were stellar, I would sing their praises."

Google said the comparison to a human was more relevant and that software-licensing issues had prevented it from mentioning benchmark tests.

Studies by big institutions such as Google in well-known journals can have an outsized influence on whether similar projects are funded in the industry. One Google researcher said the leaked paper had unfairly opened the door to questions about the credibility of any work published by the company.

After "Stronger Baselines" emerged online, Zoubin Ghahramani, a vice president at Google Research, wrote on Twitter last month that "Google stands by this work published in Nature on ML for Chip Design, which has been independently replicated, open-sourced, and used in production at Google."

Nature, citing a UK public holiday, did not have immediate comment. Madden said he hoped Nature would revisit the publication, noting that peer review notes show at least one reviewer asked for results on benchmarks.
"Somehow, that never happened," he said.



Meta Abruptly Ends US Fact-checks Ahead of Trump Term

Attendees visit the Meta booth at the Game Developers Conference in San Francisco on March 22, 2023. (AP)

Social media giant Meta on Tuesday slashed its content moderation policies, including ending its US fact-checking program on Facebook and Instagram, in a major shift that conforms with the priorities of incoming president Donald Trump.

"We're going to get rid of fact-checkers (that) have just been too politically biased and have destroyed more trust than they've created, especially in the US," Meta founder and CEO Mark Zuckerberg said in a post.

Instead, Meta platforms including Facebook and Instagram would use "community notes similar to X (formerly Twitter), starting in the US," he added.

Meta's surprise announcement echoed long-standing complaints made by Trump's Republican Party and X owner Elon Musk about fact-checking that many conservatives see as censorship.

They argue that fact-checking programs disproportionately target right-wing voices, which has led to proposed laws in states like Florida and Texas to limit content moderation.

"This is cool," Musk posted on his X platform after the announcement.

Zuckerberg, in a nod to Trump's victory, said that "recent elections feel like a cultural tipping point towards, once again, prioritizing speech" over moderation.

The shift came as the 40-year-old tycoon has been making efforts to reconcile with Trump since his election in November, including donating one million dollars to his inauguration fund.

Trump has been a harsh critic of Meta and Zuckerberg for years, accusing the company of bias against him.

The Republican was kicked off Facebook following the January 6, 2021, attack on the US Capitol by his supporters, though the company restored his account in early 2023.

Zuckerberg, like several other tech leaders, has met with Trump at his Mar-a-Lago resort in Florida ahead of his January 20 inauguration.

Meta in recent days has taken other gestures likely to please Trump's team, such as appointing former Republican official Joel Kaplan to head up public affairs at the company.

He takes over from Nick Clegg, a former British deputy prime minister.

Zuckerberg also named Ultimate Fighting Championship (UFC) head Dana White, a close ally of Trump, to the Meta board.

Kaplan, in a statement Tuesday, insisted the company's approach to content moderation had "gone too far."

"Too much harmless content gets censored, too many people find themselves wrongly locked up in 'Facebook jail,'" he said.

As part of the overhaul, Meta said it will relocate its trust and safety teams from liberal California to more conservative Texas.

"That will help us build trust to do this work in places where there is less concern about the bias of our teams," Zuckerberg said.

Zuckerberg also took a shot at the European Union, which he said "has an ever increasing number of laws institutionalizing censorship and making it difficult to build anything innovative there."

The remark referred to new laws in Europe that require Meta and other major platforms to maintain content moderation standards or risk hefty fines.

Zuckerberg said that Meta would "work with President Trump to push back against foreign governments going after American companies to censor more."

Additionally, Meta announced it would reverse its 2021 policy of reducing political content across its platforms.

Instead, the company will adopt a more personalized approach, allowing users greater control over the amount of political content they see on Facebook, Instagram, and Threads.

AFP currently works in 26 languages with Facebook's fact-checking program, under which Facebook pays around 80 organizations globally to use their fact-checks on its platform, on WhatsApp, and on Instagram.

In that program, content rated "false" is downgraded in news feeds so fewer people will see it, and if someone tries to share that post, they are presented with an article explaining why it is misleading.

Community Notes on X (formerly Twitter) allows users to collaboratively add context to posts in a system that aims to distill reliable information through consensus rather than top-down moderation.

Meta's move into fact-checking came in the wake of Trump's shock election in 2016, which critics said was enabled by rampant disinformation on Facebook and interference by foreign actors like Russia on the platform.