Facebook to Shut Down Face-recognition System, Delete Data

Photo: REUTERS

Facebook said it will shut down its face-recognition system and delete the faceprints of more than 1 billion people amid growing concerns about the technology and its misuse by governments, police and others.

“This change will represent one of the largest shifts in facial recognition usage in the technology’s history,” Jerome Pesenti, vice president of artificial intelligence for Facebook’s new parent company, Meta, wrote in a blog post on Tuesday.

He said the company was trying to weigh the positive use cases for the technology “against growing societal concerns, especially as regulators have yet to provide clear rules.” The company in the coming weeks will delete “more than a billion people’s individual facial recognition templates,” he said.

Facebook’s about-face follows a busy few weeks. On Thursday it announced Meta as the new name for the parent company, though not for the social network itself. The change, it said, will help it focus on building technology for what it envisions as the next iteration of the internet -- the “metaverse.”

The company is also facing perhaps its biggest public relations crisis to date after leaked documents from whistleblower Frances Haugen showed that it has known about the harms its products cause and often did little or nothing to mitigate them.

More than a third of Facebook’s daily active users have opted in to have their faces recognized by the social network’s system. That’s about 640 million people. Facebook introduced facial recognition more than a decade ago but gradually made it easier to opt out of the feature as it faced scrutiny from courts and regulators.

Facebook in 2019 stopped automatically recognizing people in photos and suggesting that users “tag” them; instead of making the feature the default, it asked users to choose whether to turn facial recognition on.

Facebook's decision to shut down its system “is a good example of trying to make product decisions that are good for the user and the company,” said Kristen Martin, a professor of technology ethics at the University of Notre Dame. She added that the move also demonstrates the power of public and regulatory pressure, since the face recognition system has been the subject of harsh criticism for over a decade.

Meta Platforms Inc., Facebook's parent company, appears to be looking at new forms of identifying people. Pesenti said Tuesday's announcement involves a “company-wide move away from this kind of broad identification, and toward narrower forms of personal authentication.”

“Facial recognition can be particularly valuable when the technology operates privately on a person’s own devices,” he wrote. “This method of on-device facial recognition, requiring no communication of face data with an external server, is most commonly deployed today in the systems used to unlock smartphones.”

Apple uses this kind of technology to power its Face ID system for unlocking iPhones.

Researchers and privacy activists have spent years raising questions about the tech industry's use of face-scanning software, citing studies that found it worked unevenly across boundaries of race, gender or age. One concern has been that the technology can incorrectly identify people with darker skin.

Another problem with face recognition is that in order to use it, companies have had to create unique faceprints of huge numbers of people – often without their consent and in ways that can be used to fuel systems that track people, said Nathan Wessler of the American Civil Liberties Union, which has fought Facebook and other companies over their use of the technology.

“This is a tremendously significant recognition that this technology is inherently dangerous,” he said.

Facebook found itself on the other end of the debate last year when it demanded that facial recognition startup Clearview AI, which works with police, stop harvesting Facebook and Instagram user images to identify the people in them.

Concerns also have grown because of increasing awareness of the Chinese government’s extensive video surveillance system, especially as it’s been employed in a region home to one of China’s largely Muslim ethnic minority populations.

Facebook’s huge repository of images shared by users helped make it a powerhouse for improvements in computer vision, a branch of artificial intelligence. Now many of those research teams have been refocused on Meta’s ambitions for augmented reality technology, in which the company envisions future users strapping on goggles to experience a blend of virtual and physical worlds. Those technologies, in turn, could pose new concerns about how people’s biometric data is collected and tracked.

Facebook didn’t provide clear answers when asked how people could verify that their image data was deleted and what the company would be doing with its underlying face-recognition technology.

On the first point, company spokesperson Jason Grosse said in an email only that user templates will be “marked for deletion” if their face-recognition settings are on, and that the deletion process should be completed and verified in “coming weeks.” On the second point, Grosse said that Facebook will be “turning off” components of the system associated with the face-recognition settings.

Meta’s newly wary approach to facial recognition follows decisions by other US tech giants such as Amazon, Microsoft and IBM last year to end or pause their sales of facial recognition software to police, citing concerns about false identifications and amid a broader US reckoning over policing and racial injustice.

At least seven US states and nearly two dozen cities have limited government use of the technology amid fears over civil rights violations, racial bias and invasion of privacy.

President Joe Biden’s science and technology office in October launched a fact-finding mission to look at facial recognition and other biometric tools used to identify people or assess their emotional or mental states and character. European regulators and lawmakers have also taken steps toward blocking law enforcement from scanning facial features in public spaces.

Facebook’s face-scanning practices also contributed to the $5 billion fine and privacy restrictions the Federal Trade Commission imposed on the company in 2019. Facebook’s settlement with the FTC included a promise to require “clear and conspicuous” notice before people’s photos and videos were subjected to facial recognition technology.

And the company earlier this year agreed to pay $650 million to settle a 2015 lawsuit alleging it violated an Illinois privacy law when it used photo-tagging without users’ permission.

“It is a big deal, it’s a big shift but it’s also far, far too late,” said John Davisson, senior counsel at the Electronic Privacy Information Center. EPIC filed its first complaint with the FTC against Facebook’s facial recognition service in 2011, the year after it was rolled out.



SVC Develops AI Intelligence Platform to Strengthen Private Capital Ecosystem

The platform offers customizable analytical dashboards that deliver frequent updates and predictive insights - SPA

Saudi Venture Capital Company (SVC) announced the launch of its proprietary intelligence platform, Aian, developed in-house using Saudi national expertise. The platform is intended to strengthen SVC’s institutional role in developing the Kingdom’s private capital ecosystem and to support its mandate as a market maker guided by data-driven growth principles.

According to a press release issued by SVC today, Aian is a custom-built AI-powered market intelligence capability that transforms SVC’s accumulated institutional expertise and detailed private market data into structured, actionable insights on market dynamics, sector evolution, and capital formation. The platform converts institutional memory into compounding intelligence, enabling decisions that integrate both current market signals and long-term historical trends, SPA reported.

Deputy CEO and Chief Investment Officer Nora Alsarhan stated that as Saudi Arabia’s private capital market expands, clarity, transparency, and data integrity become as critical as capital itself. She noted that Aian represents a new layer of national market infrastructure, strengthening institutional confidence, enabling evidence-based decision-making, and supporting sustainable growth.

By transforming data into actionable intelligence, she said, the platform reinforces the Kingdom’s position as a leading regional private capital hub under Vision 2030.

She added that market making extends beyond capital deployment to shaping the conditions under which capital flows efficiently, emphasizing that the next phase of market development will be driven by intelligence and analytical insight alongside investment.

Through Aian, SVC is building the knowledge backbone of Saudi Arabia’s private capital ecosystem, enabling clearer visibility, greater precision in decision-making, and capital formation guided by insight rather than assumption.

Chief Strategy Officer Athary Almubarak said that in private capital markets, access to reliable insight increasingly represents the primary constraint, particularly in emerging and fast-scaling markets where disclosures vary and institutional knowledge is fragmented.

She explained that for development-focused investment institutions, inconsistent data presents a structural challenge that directly impacts capital allocation efficiency and the ability to crowd in private investment at scale.

She noted that SVC was established to address such market frictions and that, as a government-backed investor with an explicit market-making mandate, its role extends beyond financing to building the enabling environment in which private capital can grow sustainably.

By integrating SVC’s proprietary portfolio data with selected external market sources, Aian enables continuous consolidation and validation of market activity, producing a dynamic representation of capital deployment over time rather than relying solely on static reporting.

The platform offers customizable analytical dashboards that deliver frequent updates and predictive insights, enabling SVC to identify priority market gaps, recalibrate capital allocation, design targeted ecosystem interventions, and anchor policy dialogue in evidence.

The release added that Aian also features predictive analytics capabilities that anticipate upcoming funding activity, including projected investment rounds and estimated ticket sizes. In addition, it incorporates institutional benchmarking tools that enable structured comparisons across peers, sectors, and interventions, supporting more precise, data-driven ecosystem development.


Job Threats, Rogue Bots: Five Hot Issues in AI

A Delhi police officer outside the venue of the 'India AI Impact Summit 2026'. Arun SANKAR / AFP

As artificial intelligence evolves at a blistering pace, world leaders and thousands of other delegates will discuss how to handle the technology at the AI Impact Summit, which opens Monday in New Delhi.

Here are five big issues on the agenda:

Job loss fears

Generative AI threatens to disrupt myriad industries, from software development and factory work to music and the movies.

India -- with its large customer service and tech support sectors -- could be vulnerable, and shares in the country's outsourcing firms have plunged in recent days, partly due to advances in AI assistant tools.

"Automation, intelligent systems, and data-driven processes are increasingly taking over routine and repetitive tasks, reshaping traditional job structures," the summit's "human capital" working group says.

"While these developments can drive efficiency and innovation, they also risk displacing segments of the workforce," widening socio-economic divides, it warns.

Bad robots

The Delhi summit is the fourth in a series of international AI meetings. The first in 2023 was called the AI Safety Summit, and preventing real-world harm is still a key goal.

In the United States, families of people who have taken their own lives have sued OpenAI, accusing ChatGPT of having contributed to the suicides. The company says it has made efforts to strengthen its safeguards.

Elon Musk's Grok AI tool also recently sparked global outrage and bans in several countries over its ability to create sexualized deepfakes depicting real people, including children, in skimpy clothing.

Other concerns range from copyright violations to scammers using AI tools to produce perfectly spelled phishing emails.

Energy demands

Tech giants are spending hundreds of billions of dollars on AI infrastructure, building data centers packed with cutting-edge microchips, and also, in some cases, nuclear plants to power them.

The International Energy Agency projects that electricity consumption from data centers will double by 2030, fueled by the AI boom.

In 2024, data centers accounted for an estimated 1.5 percent of global electricity consumption, it says.

Alongside concerns over planet-warming carbon emissions are worries about the water used to cool data centers' servers, which can lead to shortages on hot days.

Moves to regulate

In South Korea, a wide-ranging law regulating artificial intelligence took effect in January, requiring companies to tell users when products use generative AI.

Many countries are planning similar moves, despite a warning from US Vice President JD Vance last year against "excessive regulation" that could stifle innovation.

The European Union's Artificial Intelligence Act allows regulators to ban AI systems deemed to pose "unacceptable risks" to society.

That could include identifying people in real time in public spaces or evaluating criminal risk based on biometric data alone.

'Everyone dies'

More existential fears have also been expressed by AI insiders who believe the technology is marching towards so-called "Artificial General Intelligence", when machines' abilities match those of humans.

OpenAI and rival startup Anthropic have seen public resignations of staff members who have spoken out about the ethical implications of their technology.

Anthropic warned last week that its latest chatbot models could be nudged towards "knowingly supporting -- in small ways -- efforts toward chemical weapon development and other heinous crimes".

Researcher Eliezer Yudkowsky, author of the 2025 book "If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All", has also compared AI development to that of nuclear weapons.


OpenAI Hires Creator of 'OpenClaw' AI Agent Tool

FILE PHOTO: OpenAI logo is seen in this illustration taken May 20, 2024. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI has hired the Austrian creator of OpenClaw, an artificial intelligence tool able to execute real-world tasks, the US startup's head, Sam Altman, said on Sunday.

AI agent tool OpenClaw has fascinated -- and spooked -- the tech world since researcher Peter Steinberger built it in November to help organize his digital life.

A Reddit-like pseudo social network for OpenClaw agents called Moltbook, where chatbots converse, has also grabbed headlines and provoked soul-searching over AI.

Elon Musk called Moltbook "the very early stages of the singularity" -- a term for the hypothetical moment when AI irreversibly surpasses human intelligence -- although some people have questioned to what extent humans are manipulating the bots' posts.

Steinberger "is joining OpenAI to drive the next generation of personal agents," Altman wrote in an X post.

"He is a genius with a lot of amazing ideas about the future of very smart agents interacting with each other to do very useful things for people," AFP quoted him as saying.

"We expect this will quickly become core to our product offerings," Altman wrote, saying that OpenClaw would remain an open-source project within a foundation supported by OpenAI.

"The future is going to be extremely multi-agent and it's important to us to support open source as part of that."

Users of OpenClaw download the tool and connect it to generative AI models such as ChatGPT.

They then communicate with their AI agent through WhatsApp or Telegram, as they would with a friend or colleague.

Many users gush over the tool's futuristic abilities to send emails and buy things online, but others report an overall chaotic experience with added cybersecurity risks.

Only a small percentage of OpenAI's nearly one billion users pay for subscription services, putting pressure on the company to find new revenue sources.

It has begun testing advertisements and sponsored content in the massively popular ChatGPT, spawning privacy concerns as it looks for ways to start balancing its hundreds of billions in spending commitments.