Scientist, Enforcer, High-flyer: 3 Women Put a Mark on Tech

Frances Haugen. AP file photo

Three bright and driven women with ground-breaking ideas made significant — if very different — marks on the embattled tech industry in 2021, The Associated Press reported.

Frances Haugen, Lina Khan and Elizabeth Holmes — a data scientist turned whistleblower, a legal scholar turned antitrust enforcer and a former Silicon Valley high-flyer turned criminal defendant — all figured heavily in a technology world where men have long dominated the spotlight. Think Bill Gates, Steve Jobs, Mark Zuckerberg, Jeff Bezos, Elon Musk.

Haugen, a former product manager at Facebook, went public with internal documents to buttress accusations that the social network giant elevated profits over the safety of users. At 32, Khan is the youngest person ever to lead the Federal Trade Commission, an agency now poised to aggressively enforce antitrust law against the tech industry.

Holmes, once worth $4.5 billion on paper, is now awaiting a jury's verdict on fraud charges that she misled investors and patients about the accuracy of a blood-testing technology developed at her startup Theranos. Her story has become a Silicon Valley morality tale — a founder who flew too high, too fast — despite the fact that male tech executives have been accused of similar actions or worse without facing charges.
___
Haugen joined Facebook out of a desire to help it address misinformation and other threats to democracy. But her frustration grew as she learned of online misinformation that stoked violence and abuse — problems Facebook wasn't addressing effectively.

So in the fall of 2021 the 37-year-old Haugen went public with a trove of Facebook documents that catalogued how her former employer was failing to protect young users from body-image issues and amplifying online hate and extremism. Her work also laid bare the algorithms Big Tech uses to tailor content that will keep users hooked on its services.

“Frances Haugen has transformed the conversation about technology reform,” Roger McNamee, an early investor in Facebook who became one of its leading critics, wrote in Time magazine.

Facebook, which has since renamed itself Meta Platforms, has disputed Haugen’s assertions, although it hasn’t pointed to any factual errors in her public statements. The company instead emphasizes the vast sums it says it has invested in safety since 2016 and data showing the progress it's made against hate speech, incitement to political violence and other social ills.

Haugen was well positioned to unleash her bombshell. As a graduate business student at Harvard, she helped create an online dating platform that eventually turned into the dating app Hinge. At Google, she helped make thousands of books accessible on mobile phones and create a fledgling social network. Haugen’s creative restlessness carried her through several jobs over 15 years at Google, Yelp, Pinterest and, of course, Facebook, which recruited her in 2018.

Haugen’s revelations energized global lawmakers seeking to rein in Big Tech, although there's been little concrete action in the US. Facebook rushed to change the subject by rolling out its new corporate name and playing up its commitment to developing an immersive technology platform known as the “metaverse.”

Haugen moved this year to Puerto Rico, where she says she can enjoy anonymity that would elude her in northern California. “I don’t like being the center of attention,” she told a packed arena at a November conference in Europe.
___
A similar dynamic prevailed for Khan, an academic outsider with big new ideas and a far-reaching agenda that ruffled institutional and business feathers. President Joe Biden stunned official Washington in June when he installed Khan, an energetic critic of Big Tech then teaching law, as head of the Federal Trade Commission. That signaled a tough government stance toward giants Meta, Google, Amazon and Apple.

Khan is the youngest chair in the 106-year history of the FTC, which polices competition, consumer protection and digital privacy. She was an unorthodox choice, with no administrative experience or knowledge of the agency other than a brief 2018 stint as legal adviser to one of the five commissioners.

But she brought intellectual heft that packed a political punch. Khan shook up the antitrust world in 2017 with her scholarly work as a Yale law student, “Amazon’s Antitrust Paradox,” which helped shape a new way of looking at antitrust law.
For decades, antitrust doctrine has defined anticompetitive conduct as market dominance that drives up prices, a concept that doesn't apply to many “free” technology services. Khan instead pushed to examine the broader effects of corporate concentration on industries, employees and communities. That school of thought — dubbed “hipster antitrust” by its detractors — appears to have had a significant influence on Biden.

Khan was born in London; her family moved to the New York City area when she was 11. After graduating from college, she spent three years as a policy analyst at the liberal-leaning think tank New America Foundation before leaving for Yale.
In Khan’s first six months as chair, the FTC has sharpened its antitrust attack against Facebook in federal court and pursued a competition investigation into Amazon. The agency sued to block graphics chip maker Nvidia’s $40 billion purchase of chip designer Arm, saying a combined company could stifle the growth of new technologies.

In Khan's aggressive investigations and enforcement agenda, key priorities include racial bias in algorithms and market-power abuses by dominant tech companies. Internally, some employees have chafed at administrative changes that expanded Khan’s authority over policymaking, and one Republican commissioner has assailed Khan in public.

“She’s shaken things up,” said Robin Gaster, a visiting scholar at George Washington University who focuses on economics, politics and technology. “She is going to be a field test for whether an aggressive FTC can expand the envelope for antitrust enforcement.”

The US Chamber of Commerce, the leading business lobby, has publicly threatened court fights, asserting that Khan and the FTC are waging war on American businesses.
___
Holmes founded Theranos when she was 19, dropping out of Stanford to pursue a bold, humanitarian idea. Possessed of seemingly boundless networking chutzpah, Holmes touted Theranos blood-testing technology as a breakthrough that could scan for hundreds of medical conditions using just a few drops of blood.

By 2015, 11 years after leaving Stanford, Holmes had raised hundreds of millions of dollars for her company, pushing its market value to $9 billion. Half of that belonged to Holmes, earning her the moniker of the world’s youngest self-made female billionaire at 30.

Just three years later, though, Theranos collapsed in scandal. After a three-and-a-half-month federal trial, a jury is now weighing criminal fraud and conspiracy charges against Holmes for allegedly duping investors and patients by concealing the fact that the blood-testing technology was prone to wild errors. If convicted, Holmes, now 37, faces up to 20 years in prison.

From a young age, Holmes was a competitive prodigy who openly aspired to make a vast fortune. She started studying Mandarin Chinese with a tutor around age 9, and talked her way into summer classes in the language at Stanford after her sophomore year in high school.

In her sophomore year at Stanford, she took the remainder of her tuition money as seed capital and dropped out to run her company.

As Theranos ascended, some saw Holmes as the next Steve Jobs. Theranos ultimately raised more than $900 million from investors including media baron Rupert Murdoch and Walmart’s Walton family.

The company’s fairy-tale success started to unravel in 2016, when a series of Wall Street Journal articles and a federal regulatory audit uncovered a pattern of grossly inaccurate blood results in tests run on Theranos devices.

The Holmes trial has exposed Silicon Valley’s “fake it ‘til you make it” culture in painful detail. Tech entrepreneurs often overpromise and exaggerate, so prosecutors faced the challenge of proving that Holmes’ boosterism crossed the line into fraud.



OpenAI Starts Testing Ads in ChatGPT

The OpenAI logo is seen in this illustration taken May 20, 2024. (Reuters)

OpenAI has begun placing ads in the basic versions of its ChatGPT chatbot, a bet that users will not mind the interruptions as the company seeks revenue as its costs soar.

"The test will be for logged-in adult users on the Free and Go subscription tiers" in the United States, OpenAI said Monday. The Go subscription costs $8 in the United States.

Only a small percentage of its nearly one billion users pay for its premium subscription services, which will remain ad-free.

"Ads do not influence the answers ChatGPT gives you, and we keep your conversations with ChatGPT private from advertisers," the company said.

Since ChatGPT's launch in 2022, OpenAI's valuation has soared to $500 billion in funding rounds -- higher than any other private company. Some analysts expect it could go public with a trillion-dollar valuation.

But the ChatGPT maker burns through cash at a furious rate, mostly on the powerful computing required to deliver its services.

Its chief executive Sam Altman had long expressed his dislike for advertising, citing concerns that it could create distrust about ChatGPT's content.

His about-face drew a jab over the weekend from rival Anthropic, which made its advertising debut at the Super Bowl with commercials saying its Claude chatbot would stay ad-free.


Social Media ‘Addicting the Brains of Children,’ Plaintiff’s Lawyer Argues in Landmark Trial

Teenagers pose for a photo while holding smartphones in front of a Meta logo in this illustration taken September 11, 2025. (Reuters)

Comparing social media platforms to casinos and addictive drugs, lawyer Mark Lanier delivered opening statements Monday in a landmark trial in Los Angeles that seeks to hold Instagram owner Meta and Google's YouTube responsible for harms to children who use their products.

The two companies face claims that their platforms addict children through deliberate design choices that keep kids glued to their screens. TikTok and Snap, which were originally named in the lawsuit, settled for undisclosed sums.

Jurors got their first glimpse into what will be a lengthy trial characterized by dueling narratives from the plaintiffs and the two remaining defendants.

Meta lawyer Paul Schmidt spoke of the disagreement within the scientific community over social media addiction, with some researchers believing it doesn’t exist, or that addiction is not the most appropriate way to describe heavy social media use.

‘Addicting the brains of children’

Lanier, the plaintiff's lawyer, delivered lively first remarks where he said the case will be as “easy as ABC” — which stands for “addicting the brains of children.” He said Meta and Google, “two of the richest corporations in history,” have “engineered addiction in children’s brains.”

He presented jurors with a slew of internal emails, documents and studies conducted by Meta and YouTube, as well as YouTube’s parent company, Google. He emphasized the findings of a study Meta conducted called “Project Myst” in which they surveyed 1,000 teens and their parents about their social media use.

The two major findings, Lanier said, were that Meta knew children who experienced “adverse events” like trauma and stress were particularly vulnerable to addiction; and that parental supervision and controls made little impact.

He also highlighted internal Google documents that likened some company products to a casino, and internal communication between Meta employees in which one person said Instagram is “like a drug” and they are “basically pushers.”

At the core of the Los Angeles case is a 20-year-old identified only by the initials “KGM,” whose case could determine how thousands of other, similar lawsuits against social media companies will play out. She and two other plaintiffs have been selected for bellwether trials — essentially test cases for both sides to see how their arguments play out before a jury.

Plaintiff grew up using YouTube, Instagram

KGM made a brief appearance after a break during Lanier’s statement and she will return to testify later in the trial. Lanier spent time describing KGM's childhood, focusing particularly on what her personality was like before she began using social media.

She started using YouTube at age 6 and Instagram at age 9, Lanier said. Before she graduated elementary school, she had posted 284 videos on YouTube.

The outcome of the trial could have profound effects on the companies' businesses and how they will handle children using their platforms.

Lanier said the companies’ lawyers will “try to blame the little girl and her parents for the trap they built,” referencing the plaintiff. She was a minor when she said she became addicted to social media, which she claims had a detrimental impact on her mental health.

Lanier said that despite the public position of Meta and YouTube being that they work to protect children, their internal documents show an entirely different position, with explicit references to young children being listed as their target audiences.

The attorney also drew comparisons between the social media companies and tobacco firms, citing internal communication between Meta employees who were concerned about the company’s lack of proactive action about the potential harm their platforms can have on children and teens.

“For a teenager, social validation is survival,” Lanier said. The defendants “engineered a feature that caters to a minor’s craving for social validation,” he added, speaking about “like” buttons and similar features.

Meta pushes back

In his opening statement representing Meta, Schmidt said the core question in the case is whether the platforms were a substantial factor in KGM’s mental health struggles. He spent much of his time going through the plaintiff’s health records, emphasizing that she had experienced many difficult circumstances in her childhood, including emotional abuse, body image issues and bullying.

Schmidt presented a clip from a video deposition from one of KGM‘s mental health providers, Dr. Thomas Suberman, who said social media was “not the through-line of what I recall being her main issues,” adding that her struggles seemed to largely stem from interpersonal conflicts and relationships.

He painted a picture — with KGM’s own text messages and testimony pointing to a volatile home life — of a particularly troubled relationship with her mother.

Schmidt acknowledged that many mental health professionals do believe social media addiction can exist, but said three of KGM’s providers — all of whom believe such addiction is possible — have never diagnosed her with it, or treated her for it.

Schmidt stressed to the jurors that the case is not about whether social media is a good thing or whether teens spend too much time on their phones or whether the jurors like or dislike Meta, but whether social media was a substantial factor in KGM’s mental health struggles.

A reckoning for social media and youth harms

A slew of trials beginning this year seek to hold social media companies responsible for harming children's mental well-being. Executives, including Meta CEO Mark Zuckerberg, are expected to testify at the Los Angeles trial, which will last six to eight weeks.

Experts have drawn similarities to the Big Tobacco trials that led to a 1998 settlement requiring cigarette companies to pay billions in health care costs and restrict marketing targeting minors.

A separate trial in New Mexico, meanwhile, also kicked off with opening statements on Monday. In that trial, Meta is accused of failing to protect young users from sexual exploitation, following an undercover online investigation. Attorney General Raúl Torrez in late 2023 sued Meta and Zuckerberg, who was later dropped from the suit.

A federal bellwether trial beginning in June in Oakland, California, will be the first to represent school districts that have sued social media platforms over harms to children.

In addition, more than 40 state attorneys general have filed lawsuits against Meta, claiming it is harming young people and contributing to the youth mental health crisis by deliberately designing features on Instagram and Facebook that addict children to its platforms. Most filed their lawsuits in federal court, but some sued in their own state courts.

TikTok also faces similar lawsuits in more than a dozen states.


AI No Better Than Other Methods for Patients Seeking Medical Advice, Study Shows

AI (Artificial Intelligence) letters and a robot hand are placed on a computer motherboard in this illustration created on June 23, 2023. (Reuters)

Asking AI about medical symptoms does not help patients make better decisions about their health than other methods, such as a standard internet search, according to a new study published in Nature Medicine.

The authors said the study was important as people were increasingly turning to AI and chatbots for advice on their health, but without evidence that this was necessarily the best and safest approach.

Researchers led by the University of Oxford’s Internet Institute worked alongside a group of doctors to draw up 10 different medical scenarios, ranging from a common cold to a life-threatening hemorrhage causing bleeding on the brain.

When tested without human participants, three large language models – OpenAI's GPT-4o, Meta's Llama 3 and Cohere's Command R+ – identified the conditions in 94.9% of cases, and chose the correct course of action, like calling an ambulance or going to the doctor, in an average of 56.3% of cases. The companies did not respond to requests for comment.

'HUGE GAP' BETWEEN AI'S POTENTIAL AND ACTUAL PERFORMANCE

The researchers then recruited 1,298 participants in Britain, who used either AI or their usual resources – such as an internet search, their own experience or the National Health Service website – to investigate the symptoms and decide their next step.

When the participants did this, relevant conditions were identified in less than 34.5% of cases, and the right course of action was given in less than 44.2%, no better than the control group using more traditional tools.

Adam Mahdi, co-author of the paper and associate professor at Oxford, said the study showed the “huge gap” between the potential of AI and the pitfalls when it was used by people.

“The knowledge may be in those bots; however, this knowledge doesn’t always translate when interacting with humans,” he said, meaning that more work was needed to identify why this was happening.

HUMANS OFTEN GAVE INCOMPLETE INFORMATION

The team studied around 30 of the interactions in detail and concluded that humans often provided incomplete or wrong information, while the LLMs also sometimes generated misleading or incorrect responses.

For example, one participant reporting the symptoms of a subarachnoid hemorrhage – a life-threatening condition causing bleeding on the brain – was correctly told by AI to go to hospital after describing a stiff neck, light sensitivity and the "worst headache ever". Another described the same symptoms but only a "terrible" headache, and was told to lie down in a darkened room.

The team now plans a similar study in different countries and languages, and over time, to test if that impacts AI’s performance.

The study was supported by the data company Prolific, the German non-profit Dieter Schwarz Stiftung, and the UK and US governments.