AI-Generated Faces More Trustworthy than Real Ones, New Study Finds

A green wireframe model covers an actor's lower face during the creation of a synthetic facial reanimation video, known alternatively as a deepfake, in London, Britain February 12, 2019. Reuters TV via REUTERS/File Photo

People cannot distinguish between a face generated by artificial intelligence using StyleGAN2 and a real face, according to a study published in the journal Proceedings of the National Academy of Sciences.

Dr. Sophie Nightingale from Lancaster University and Professor Hany Farid from the University of California, Berkeley, conducted experiments in which participants were asked to distinguish state-of-the-art StyleGAN2-synthesized faces from real faces and to rate the level of trust the faces evoked.

The results revealed that synthetically generated faces are not only highly photorealistic but nearly indistinguishable from real faces, and they are even judged to be more trustworthy. The researchers warn of the implications of people’s inability to identify AI-generated images.

In the first experiment, 315 participants classified 128 faces taken from a set of 800 as either real or synthesized. Their accuracy rate was 48 percent.

In a second experiment, 219 new participants were trained and given feedback on how to classify faces. They classified 128 faces taken from the same set of 800 faces as in the first experiment, but despite their training, the accuracy rate improved only to 59 percent. The researchers then decided to find out whether perceptions of trustworthiness could help people identify artificial images.

A third study asked 223 participants to rate the trustworthiness of 128 faces taken from the same set of 800 faces on a scale of 1 (very untrustworthy) to 7 (very trustworthy).

The average rating for synthetic faces was 7.7 percent higher than the average rating for real faces, a difference that is statistically significant.

“Perhaps most interestingly, we find that synthetically-generated faces are more trustworthy than real faces,” said Nightingale in a report.

To protect the public from deepfakes, Nightingale proposed guidelines for the creation and distribution of synthesized images. Safeguards could include, for example, incorporating robust watermarks into the image- and video-synthesis networks, which would provide a downstream mechanism for reliable identification.



DeepSeek Available to Download Again in South Korea After Suspension 

The DeepSeek logo is seen on January 29, 2025. (Reuters)

Chinese artificial intelligence service DeepSeek became available again on South Korean app markets on Monday, about two months after downloads were suspended when authorities cited breaches of data protection rules.

South Korea's Personal Information Protection Commission said on Thursday that DeepSeek transferred user data and prompts without permission when the service first launched in South Korea in January.

Downloads of the app were suspended in February after the questions over personal data protection surfaced, but the service is now available for download again on South Korean app markets, including Apple's App Store and Google's Play Store.

"We process your personal information in compliance with the Personal Information Protection Act of Korea," DeepSeek said in a revised privacy policy note applied to the app.

DeepSeek said users had the option to refuse to allow the transfer of personal information to a number of companies in China and the United States.

DeepSeek did not immediately respond to a request for comment on Monday.

South Korea's data protection agency said DeepSeek had voluntarily decided to make the app available for download again, which it was free to do after at least partially implementing the agency's recommendations.