Instagram Tries Using AI to Determine If Teens Are Pretending to Be Adults

In this photo illustration, a person looks at a smartphone with an Instagram logo displayed on the screen, on August 17, 2021, in Arlington, Virginia. (AFP)


Instagram is beginning to test the use of artificial intelligence to determine if kids are lying about their ages on the app, parent company Meta Platforms said on Monday.

Meta has been using AI to determine people's ages for some time, the company said, but the photo- and video-sharing app will now “proactively” look for accounts it suspects belong to teenagers, even if those users entered an inaccurate birthdate when they signed up.

If it is determined that a user is misrepresenting their age, the account will automatically become a teen account, which has more restrictions than an adult account. Teen accounts are private by default. Private messages are restricted so teens can only receive them from people they follow or are already connected to.

“Sensitive content,” such as videos of people fighting or those promoting cosmetic procedures, will be limited, Meta said. Teens will also get notifications if they are on Instagram for more than 60 minutes and a “sleep mode” will be enabled that turns off notifications and sends auto-replies to direct messages from 10 pm until 7 am.

Meta says it trains its AI to look for signals, such as the type of content the account interacts with, profile information and when the account was created, to determine the owner's age.

The heightened measures arrive as social media companies face increased scrutiny over how their platforms affect the mental health and well-being of younger users. A growing number of states are also trying to pass age verification laws, although these have faced court challenges.

Meta and other social media companies support putting the onus on app stores to verify ages amid criticism that they don’t do enough to make their products safe for children — or verify that no kids under 13 use them.

Instagram will also send notifications to parents “with information about how they can have conversations with their teens on the importance of providing the correct age online,” the company said.



OpenAI Finds More Chinese Groups Using ChatGPT for Malicious Purposes

FILE PHOTO: OpenAI logo is seen in this illustration taken February 8, 2025. REUTERS/Dado Ruvic/Illustration/File Photo


OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, the ChatGPT maker said in a report released Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said, according to Reuters.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as the use of its models to create and debug malware, or to generate fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID.

Some content also criticized US President Donald Trump's sweeping tariffs, generating X posts, such as "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?".

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.

OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.