Instagram Tries Using AI to Determine If Teens Are Pretending to Be Adults

In this photo illustration, a person looks at a smartphone with an Instagram logo displayed on the screen, on August 17, 2021, in Arlington, Virginia. (AFP)

Instagram is beginning to test the use of artificial intelligence to determine if kids are lying about their ages on the app, parent company Meta Platforms said on Monday.

Meta has been using AI to determine people's ages for some time, the company said, but the photo and video-sharing app will now “proactively” look for accounts it suspects belong to teenagers, even if the users entered an inaccurate birthdate when they signed up.

If it is determined that a user is misrepresenting their age, the account will automatically become a teen account, which has more restrictions than an adult account. Teen accounts are private by default. Private messages are restricted so teens can only receive them from people they follow or are already connected to.

“Sensitive content,” such as videos of people fighting or those promoting cosmetic procedures, will be limited, Meta said. Teens will also get notifications if they are on Instagram for more than 60 minutes and a “sleep mode” will be enabled that turns off notifications and sends auto-replies to direct messages from 10 pm until 7 am.

Meta says it trains its AI to look for signals, such as the type of content the account interacts with, profile information and when the account was created, to determine the owner's age.

The heightened measures arrive as social media companies face increased scrutiny over how their platforms affect the mental health and well-being of younger users. A growing number of states are also trying to pass age verification laws, although they have faced court challenges.

Meta and other social media companies support putting the onus on app stores to verify ages amid criticism that they don’t do enough to make their products safe for children — or verify that no kids under 13 use them.

Instagram will also send notifications to parents “with information about how they can have conversations with their teens on the importance of providing the correct age online,” the company said.



Reddit Sues AI Giant Anthropic Over Content Use

Dario Amodei, co-founder and CEO of Anthropic. JULIEN DE ROSA / AFP

Social media outlet Reddit filed a lawsuit Wednesday against artificial intelligence company Anthropic, accusing the startup of illegally scraping millions of user comments to train its Claude chatbot without permission or compensation.

The lawsuit in a California state court represents the latest front in the growing battle between content providers and AI companies over the use of data to train increasingly sophisticated language models that power the generative AI revolution.

Anthropic, valued at $61.5 billion and heavily backed by Amazon, was founded in 2021 by former executives from OpenAI, the creator of ChatGPT.

The company, known for its Claude chatbot and AI models, positions itself as focused on AI safety and responsible development.

"This case is about the two faces of Anthropic: the public face that attempts to ingratiate itself into the consumer's consciousness with claims of righteousness and respect for boundaries and the law, and the private face that ignores any rules that interfere with its attempts to further line its pockets," the suit said.

According to the complaint, Anthropic has been training its models on Reddit content since at least December 2021, with CEO Dario Amodei co-authoring research papers that identified Reddit as a source of high-quality training data.

The lawsuit alleges that despite Anthropic's public claims that it had blocked its bots from accessing Reddit, the company's automated systems continued to access Reddit's servers more than 100,000 times in subsequent months.

Reddit is seeking monetary damages and a court injunction to force Anthropic to comply with its user agreement terms. The company has requested a jury trial.

In an email to AFP, Anthropic said, "We disagree with Reddit's claims and will defend ourselves vigorously."

Reddit has entered into licensing agreements with other AI giants including Google and OpenAI, which allow those companies to use Reddit content under terms that protect user privacy and provide compensation to the platform.

Those deals have helped lift Reddit's share price since it went public in 2024.

Reddit shares closed up more than six percent on Wednesday following news of the lawsuit.

Musicians, book authors, visual artists and news publications have sued various AI companies over the use of their data without permission or payment.

AI companies generally defend their practices by claiming fair use, arguing that training AI on large datasets fundamentally changes the original content and is necessary for innovation.

Though most of these lawsuits are still in early stages, their outcomes could have a profound effect on the shape of the AI industry.