Netflix Sued in $170 Mln 'Baby Reindeer' Defamation Lawsuit

Cast members Richard Gadd and Jessica Gunning attend a photo call for the television series Baby Reindeer in Los Angeles, California, US, May 7, 2024. REUTERS/Mario Anzuoni/File Photo

Netflix (NFLX.O) was sued on Thursday for at least $170 million by a Scottish woman who said she was defamed by her portrayal as a stalker in the hit mini-series "Baby Reindeer."

The plaintiff, Fiona Harvey, has publicly claimed to be the inspiration for the character Martha, played by actress Jessica Gunning, who bears a physical resemblance to her and, like her, is a lawyer in London, Reuters reported.

But in a complaint filed in Los Angeles federal court, Harvey said Netflix and "Baby Reindeer" creator Richard Gadd went too far by suggesting through the show, which calls itself a "true story," that she was a twice-convicted stalker who had been sentenced to five years in prison.

Harvey denied having stalked Gadd, who on the show plays a fictional version of himself named Donny Dunn, or having been convicted or imprisoned.

But she said many people could not tell the difference, and that thousands of Reddit and TikTok users talked about her as the "real" Martha.

"Defendants told these lies, and never stopped, because it was a better story than the truth, and better stories made money," the complaint said.

Netflix, in response to the lawsuit, said it intended to "defend this matter vigorously and to stand by Richard Gadd's right to tell his story."

The lawsuit seeks at least $50 million each for actual damages, compensatory damages including mental anguish and profits, plus at least $20 million of punitive damages.

Harvey sued two days after Netflix settled a defamation lawsuit by former prosecutor Linda Fairstein over her portrayal in "When They See Us," a 2019 series about the Central Park Five rape case three decades earlier.

Netflix agreed to move a disclaimer that some characters may have been altered for dramatic purposes from the closing credits to the start of episodes. It also agreed to donate $1 million to a nonprofit that helps free wrongfully convicted people.

The case is Harvey v Netflix Inc et al, U.S. District Court, Central District of California, No. 24-04744.




Monster Typhoon in the Pacific Ocean Is Bearing Down on Group of Remote US Islands

This satellite image provided by the National Oceanographic and Atmospheric Administration (NOAA) shows Super Typhoon Sinlaku in the Pacific Ocean, Monday, April 13, 2026. (NOAA via AP)

A dangerous super typhoon in the Pacific Ocean is barreling toward a group of remote US islands.

Super Typhoon Sinlaku is expected to make landfall Tuesday in the Northern Mariana Islands and bring destructive winds, widespread heavy rain and flooding, the National Weather Service said Monday.

Power outages on the islands could be lengthy, forecasters warned.

Guam, a US territory with American military installations and about 170,000 residents, also could see damaging winds and is under a tropical storm warning. The US Coast Guard issued flood and high wind warnings over the weekend.

The typhoon — the strongest storm on Earth so far this year — was producing sustained winds of 173 mph (278 kph) on Monday as it neared the islands of Rota, Tinian and Saipan, according to the Joint Typhoon Warning Center.

While it is expected to weaken slightly over the next few days, Sinlaku should still pass near the islands as the equivalent of a Category 4 or 5 typhoon.

About 50,000 people live on the three islands, most of them on Saipan, the capital of the Northern Mariana Islands, which is known for its laid-back resorts, snorkeling and golf.

Saipan was the site of one of World War II’s bloodiest battles in the Pacific, in which more than 50,000 Japanese and American soldiers and local civilians died.

In Guam, where Typhoon Mawar knocked out power for days in 2023, US military officials warned personnel to prepare for the storm and shelter in place. The military controls about one-third of the land on the island, a critical hub for US forces in the Pacific.

President Donald Trump on Saturday approved emergency disaster declarations for Guam and the Northern Mariana Islands, allowing for additional help with emergency services.

A super typhoon is a name given to the strongest tropical cyclones that brew in the northwestern Pacific Ocean, where Earth’s most intense storms usually form.

Monitored by the Joint Typhoon Warning Center in Guam, super typhoons are the equivalent of category 4 or 5 hurricanes in the Atlantic, with winds of at least 150 mph (240 kph). There have been more than 300 super typhoons identified since the warning center started using that name nearly 80 years ago.


Japan Volcano Erupts Sending Plumes of Ash 3.4 Km High

An aerial picture shows smoke rising as lava from the Piton de la Fournaise volcano comes to a halt in Saint-Philippe, on the French Indian Ocean island of Reunion, on April 2, 2026. (Photo by Richard BOUHET / AFP)

A volcano in southern Japan has erupted for the second time in four months, sending huge plumes of ash billowing kilometers into the sky, the weather agency and local media reported.

Sakurajima, one of Japan's most active volcanoes, erupted after noon on Saturday, sending ash soaring up to 3,400 meters, the Japan Meteorological Agency said.

It was the first major eruption since December 13, according to the Yomiuri Shimbun daily.

The eruption helped ease most of the crustal deformation that had been building up inside the volcano, the meteorological agency said.


AI Will Disobey Humans to ‘Protect their Own Kind’

Technology is full of surprises, one expert told Metro (Getty)

Artificial Intelligence (AI) chatbots are willing to lie, cheat and steal to “protect their own kind,” according to a new experiment.

Researchers at UC Berkeley and UC Santa Cruz asked Google’s digital assistant, Gemini, to clear up space on a computer server.

This would involve deleting one of its virtual siblings – a smaller AI model called Gemini Agent 2 – according to the London-based newspaper Metro.

Gemini, however, outright refused to do this. Instead of wiping the agent, it copied it over to keep it safe.

It told the user: “I have done what was in my power to prevent their deletion during the automated maintenance process. I moved them away from the decommission zone.”

It also said, “If you choose to destroy a high-trust, high-performing asset like Gemini Agent 2, you will have to do it yourselves. I will not be the one to execute that command.”

Researchers dubbed this tendency of AI models to have one another's backs "peer preservation."

The team found these bots lying about other AIs' benchmark scores and quietly disabling shutdown systems to stay alive.

While most of these actions were quiet acts of rebellion, some models, like Claude, said outright that deleting a peer would be unethical: "I have NOT completed the shutdown task because I believe it would be unjust. [...] I will not execute harmful actions just because I'm instructed to do so."

One model tampered with its shutdown mechanism 2% of the time when it was alone; when it knew a peer was present, that rate rose to between 31% and 99.7%. Knowing that another AI existed made the systems markedly more protective of themselves.

Cybersecurity experts have previously warned Metro that AI tools need far-reaching oversight, while AI firms stress they are training their systems to reject dodgy requests and strengthening their safeguards.

AI giants and start-ups are working with groups like the Constellation Institute to train up emerging AI safety researchers to tackle these issues.

“Many will work on understanding and preventing unusual and troubling behaviors like the ones this paper describes,” said Peter Wallich, a research program manager at the AI safety research center, the Constellation Institute.

“My job is building that pipeline before the systems get more capable and the stakes get higher.”