Microsoft Stops Selling Emotion-Reading Tech, Limits Face Recognition

The Microsoft company logo is displayed at their offices in Sydney, on Feb. 3, 2021. (AP)

Microsoft Corp on Tuesday said it would stop selling technology that guesses someone's emotion based on a facial image and would no longer provide unfettered access to facial recognition technology.

The actions reflect efforts by leading cloud providers to rein in sensitive technologies on their own as lawmakers in the United States and Europe continue to weigh comprehensive legal limits.

Since at least last year, Microsoft has been reviewing whether emotion recognition systems are rooted in science.

"These efforts raised important questions about privacy, the lack of consensus on a definition of 'emotions,' and the inability to generalize the linkage between facial expression and emotional state across use cases, regions, and demographics," Sarah Bird, principal group product manager at Microsoft's Azure AI unit, said in a blog post.

Existing customers will have one year before losing access to artificial intelligence tools that purport to infer emotion, gender, age, smile, facial hair, hair and makeup.

Alphabet Inc's Google Cloud last year embarked on a similar evaluation, first reported by Reuters. Google blocked 13 planned emotions from its tool for reading emotion and placed under review four existing ones, such as joy and sorrow. It was weighing a new system that would describe movements such as frowning and smiling, without seeking to attach them to an emotion.

Google did not immediately respond to a request for comment on Tuesday.

Microsoft also said customers now must obtain approval to use its facial recognition services, which can enable people to log into websites or open locked doors through a face scan.

The company called on clients to avoid situations that infringe on privacy or in which the technology might struggle, such as identifying minors, but did not explicitly ban those uses.



OpenAI Finds More Chinese Groups Using ChatGPT for Malicious Purposes

FILE PHOTO: OpenAI logo is seen in this illustration taken February 8, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, which the ChatGPT maker described in a report released Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said, according to Reuters.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as creating and debugging malware, or generating fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID.

Some content also criticized US President Donald Trump's sweeping tariffs, including an X post that read: "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.

OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.