Human rights groups have urged video-conferencing company Zoom to scrap research on integrating emotion recognition tools into its products, saying the technology can infringe users' privacy and perpetuate discrimination.
Technology publication Protocol reported last month that California-based Zoom was looking into building such tools, which could use artificial intelligence (AI) to scan facial movements and speech to draw conclusions about people's mood.
In a joint letter sent to Zoom Chief Executive Eric Yuan on Wednesday, more than 25 rights groups including Access Now, the American Civil Liberties Union (ACLU) and the Muslim Justice League said the technology was inaccurate and could threaten basic rights.
"If Zoom advances with these plans, this feature will discriminate against people of certain ethnicities and people with disabilities, hardcoding stereotypes into millions of devices," said Caitlin Seeley George, director of campaign and operations at Fight for the Future, a digital rights group.
"Beyond mining users for profit and allowing businesses to capitalize on them, this technology could take on far more sinister and punitive uses," George said.
Zoom did not immediately respond to a request for comment.
Zoom Video Communications Inc emerged as a major video conferencing platform around the world during COVID-19 lockdowns as education and work shifted online, reporting more than 200 million daily users at the height of the pandemic in 2020.
The company has already built tools that purport to analyze the sentiment of meetings based on text transcripts of video calls, and according to Protocol it also plans to explore more advanced emotion reading tools across its products.
In a blog post describing the sentiment analysis technology, Zoom said its tools can measure the "emotional tone of the conversations" in order to help sales people improve their pitches.
But the rights groups' letter said rolling out emotion recognition analysis for video calls would trample users' rights.
"This move to mine users for emotional data points based on the false idea that AI can track and analyze human emotions is a violation of privacy and human rights," said the letter, a copy of which was sent to the Thomson Reuters Foundation.
"Zoom needs to halt plans to advance this feature," it added.
From classrooms to job interviews to public places, emotion recognition tools are increasingly common, despite questions about their accuracy and human rights implications.
Critics of the technology often draw parallels to facial recognition technologies, which have been shown to have high error rates on non-white faces, and have led to wrongful arrests.
Esha Bhandari, deputy director of the ACLU Speech, Privacy, and Technology Project, called emotion AI "a junk science."
"There is no good reason for Zoom to mine its users' facial expressions, vocal tones, and eye movements to develop this creepy technology," she said in emailed comments.