Privacy Mistakes that Keep Security Experts Always Cautious

A lock icon, signifying an encrypted Internet connection, is seen on an Internet Explorer browser in a photo illustration in Paris. REUTERS/Mal Langsdon

When it comes to privacy, it's the little things that can lead to big mishaps.

Privacy and security are often thought of as one and the same. While the two are related, privacy has become a discipline in its own right, which means security experts need to familiarize themselves with the subtle mistakes that can lead to dangerous privacy snafus.

- Privacy by Design

With the General Data Protection Regulation (GDPR) going live in Europe last spring and California's privacy law taking effect in 2020, companies should expect privacy to become more of an issue in the years ahead. Colorado and Vermont have passed privacy laws, as has Brazil, and India is well on its way to passing one of its own.

Mark Bower, general manager and chief revenue officer at Egress Software Technologies, says that first and foremost, companies have to think of privacy by design.

Privacy by design requires companies to ask the following questions: What type of data are we storing? For what business purposes? Does the data need to be encrypted? How will the data be destroyed when it becomes obsolete, and how long a period will that be? Are there compliance regulations that stipulate data destruction requirements? How will the company protect personally identifiable information for credit cards and medical information?
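One way to make those questions actionable is to record the answers in a machine-readable data inventory. The sketch below is a hypothetical illustration (the entry names and fields are assumptions, not any particular company's schema) of how each category of stored data can carry its declared purpose, encryption status, and retention period, so that violations can be checked automatically:

```python
from dataclasses import dataclass

# Hypothetical sketch: recording privacy-by-design answers as a
# machine-readable inventory entry, one per category of stored data.
@dataclass
class DataInventoryEntry:
    name: str                # what data is stored, e.g. "customer_emails"
    purpose: str             # the business purpose for keeping it
    encrypted_at_rest: bool  # does it need to be (and is it) encrypted?
    retention_days: int      # how long before it becomes obsolete
    regulated_by: list[str]  # e.g. ["GDPR"] if compliance rules apply
    contains_pii: bool       # credit-card, medical, or other PII

def retention_violations(inventory, ages_in_days):
    """Return names of entries kept past their declared retention period."""
    return [e.name for e in inventory
            if ages_in_days[e.name] > e.retention_days]

inventory = [
    DataInventoryEntry("customer_emails", "support", True, 365, ["GDPR"], True),
    DataInventoryEntry("server_logs", "debugging", False, 30, [], False),
]
# server_logs is 90 days old against a 30-day retention rule -> flagged
print(retention_violations(inventory, {"customer_emails": 100, "server_logs": 90}))
```

An inventory like this also gives the destruction schedule an owner: anything the check flags is due for the disposal process the compliance regulations stipulate.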

- Email mishaps

1. The accidental email: Egress Software's Bower told the Dark Reading website that many misdirected emails are sent because users type the first couple of letters of a name and go with whatever autocomplete suggests first. Training users to check the To: field twice before hitting "send" can help, and newer machine-learning and AI technologies can track whom users typically email and prompt them to confirm unusual recipients. For salespeople or reporters who deal with lots of new contacts, the system can flag that this is the first time they are writing to a person and ask whether they really want to send that attachment.

2. Somebody forwards a corporate email to a friend, spouse, or personal account: companies need to rethink how they control the corporate information they send to their staff, Bower adds. The emails could be about something seemingly innocuous, like holiday plans, or inside information about a new product. Either way, companies have to decide whether to let people forward them outside the company or to restrict or block forwarding altogether.

3. A user adds a new person to an email string who shouldn't have access: emails can get into the wrong hands when someone adds a person to a thread to keep him in the loop, but then somebody else includes confidential information that the added person shouldn't have access to, Bower points out. Once again, people need to be trained on how to be more sensitive to email strings and who really needs to see the information being sent. Technologies that use AI and machine learning can help, he says, and they can be used to block access if it's discovered that information has been sent to somebody who does not have proper access rights.
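The first-time-recipient check described in item 1 reduces, at its core, to comparing an outgoing message's recipients against the sender's history. This is a minimal sketch under assumed data (the addresses and history list are illustrative, not from any real product):

```python
# Hypothetical sketch of the recipient check described above: before a
# message goes out, compare its recipients against the sender's history
# and flag anyone the sender has never emailed before.
def first_time_recipients(sender_history, recipients):
    """Return the recipients this sender has never emailed before."""
    known = {addr.lower() for addr in sender_history}
    return [r for r in recipients if r.lower() not in known]

history = ["alice@corp.example", "bob@corp.example"]
to_field = ["alice@corp.example", "al.smith@other.example"]  # autocomplete slip?
flagged = first_time_recipients(history, to_field)
print(flagged)  # the unfamiliar address -> prompt the user to confirm
```

Commercial tools layer machine learning on top of this, scoring how unusual a recipient is for a given message rather than applying a simple known/unknown test, but the confirmation prompt works the same way.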

- Sync and Share

4. A 'sync and share' causes a potential data breach: Chuck Holland, director of product management at Vera Security, says companies have to rethink their BYOD policies because every time an employee syncs a mobile device, she may be syncing corporate data to her personal cloud. Similarly, and perhaps worse for the employee, she could be syncing her personal information to the corporate network.

5. Companies don't practice good off-boarding routines: Holland says companies have to do a better job of off-boarding when an employee leaves for another job or for performance reasons. Too often, companies leave old accounts open, with sensitive information still stored on the hard drives of their computers or in emails. Companies need to understand that hackers look for those types of accounts, either for information they can sell or as a foothold for widespread attacks.

6. Companies don't encrypt email and data transfers: companies should never send unencrypted data or emails over the corporate network, a BigID official says. Departments that should think extra carefully about privacy and the handling of sensitive personal and corporate information include human resources, marketing, advertising, and accounting, she adds.

7. During M&As, companies use privacy as a bargaining chip: while companies take privacy into account during a merger or acquisition, very often they use it to negotiate a lower purchase price, BigID's Farber says. After the merger, however, instead of investing the money saved in privacy and security, they simply move it to the bottom line.
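The off-boarding gap in item 5 is straightforward to audit for. This sketch (account records and the grace period are hypothetical examples) lists accounts still enabled after their owner's departure date has passed a grace window:

```python
from datetime import date

# Hypothetical sketch of an off-boarding audit: list accounts that remain
# enabled even though the employee left more than a grace period ago.
def stale_accounts(accounts, today, grace_days=7):
    """Return users whose accounts should already have been disabled."""
    return [a["user"] for a in accounts
            if a["enabled"] and a["left_on"] is not None
            and (today - a["left_on"]).days > grace_days]

accounts = [
    {"user": "jdoe",   "enabled": True, "left_on": date(2024, 1, 2)},
    {"user": "asmith", "enabled": True, "left_on": None},  # still employed
]
print(stale_accounts(accounts, today=date(2024, 3, 1)))  # jdoe is overdue
```

In practice the departure dates would come from the HR system and the account list from the identity provider; the point is that the join between the two is simple enough to run on a schedule rather than rely on manual checklists.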



Siemens Energy Trebles Profit as AI Boosts Power Demand

FILED - 05 August 2025, Berlin: The "Siemens Energy" logo can be seen in the entrance area of the company. Photo: Britta Pedersen/dpa

German turbine maker Siemens Energy said Wednesday that its quarterly profits had almost tripled as the firm gains from surging demand for electricity driven by the artificial intelligence boom.

The company's gas turbines are used to generate electricity for data centers that provide computing power for AI, and have been in hot demand as US tech giants like OpenAI and Meta rapidly build more of the sites.

Net profit in the group's fiscal first quarter, to end-December, climbed to 746 million euros ($889 million) from 252 million euros a year earlier.

Orders -- an indicator of future sales -- increased by a third to 17.6 billion euros.

The company's shares rose over five percent in Frankfurt trading, putting the stock up about a quarter since the start of the year and making it the best performer to date in Germany's blue-chip DAX index.

"Siemens Energy ticked all of the major boxes that investors were looking for with these results," Morgan Stanley analysts wrote in a note, adding that the company's gas turbine orders were "exceptionally strong".

US data center electricity consumption is projected to more than triple by 2035, according to the International Energy Agency, and already accounts for six to eight percent of US electricity use.

Asked about rising orders on an earnings call, Siemens Energy CEO Christian Bruch said he thought the first-quarter figures were not "particularly strong" and that further growth could be expected.

"Demand for gas turbines is extremely high," he said. "We're talking about 2029 and 2030 for delivery dates."

Siemens Energy, spun out of the broader Siemens group in 2020, said last week that it would spend $1 billion expanding its US operations, including a new equipment plant in Mississippi as part of wider plans that would create 1,500 jobs.

Its shares have increased over tenfold since 2023, when the German government had to provide the firm with credit guarantees after quality problems at its wind-turbine unit.


Instagram Boss to Testify at Social Media Addiction Trial 

The Instagram app icon is seen on a smartphone in this illustration taken October 27, 2025. (Reuters)

Instagram chief Adam Mosseri is to be called to testify Wednesday in a Los Angeles courtroom by lawyers out to prove social media is dangerously addictive by design to young, vulnerable minds.

YouTube and Meta -- the parent company of Instagram and Facebook -- are defendants in a blockbuster trial that could set a legal precedent regarding whether social media giants deliberately designed their platforms to be addictive to children.

Rival lawyers made opening remarks to jurors this week, with an attorney for YouTube insisting that the Google-owned video platform was neither intentionally addictive nor technically social media.

"It's not social media addiction when it's not social media and it's not addiction," YouTube lawyer Luis Li told the 12 jurors during his opening remarks.

The civil trial in California state court centers on allegations that a 20-year-old woman, identified as Kaley G.M., suffered severe mental harm after becoming addicted to social media as a child.

She started using YouTube at six and joined Instagram at 11, before moving on to Snapchat and TikTok two or three years later.

The plaintiff "is not addicted to YouTube. You can listen to her own words -- she said so, her doctor said so, her father said so," Li said, citing evidence he said would be detailed at trial.

Li's opening arguments followed remarks on Monday from lawyers for the plaintiffs and co-defendant Meta.

On Monday, the plaintiffs' attorney Mark Lanier told the jury YouTube and Meta both engineer addiction in young people's brains to gain users and profits.

"This case is about two of the richest corporations in history who have engineered addiction in children's brains," Lanier said.

"They don't only build apps; they build traps."

But Li told the six men and six women on the jury that he did not recognize the description of YouTube put forth by the other side and tried to draw a clear line between YouTube's widely popular video app and social media platforms like Instagram or TikTok.

YouTube is selling "the ability to watch something essentially for free on your computer, on your phone, on your iPad," Li insisted, comparing the service to Netflix or traditional TV.

Li said it was the quality of content that kept users coming back, citing internal company emails that he said showed executives rejecting a pursuit of internet virality in favor of educational and more socially useful content.

- 'Gateway drug' -

Stanford University School of Medicine professor Anna Lembke, the first witness called by the plaintiffs, testified that she views social media, broadly speaking, as a drug.

The part of the brain that acts as a brake when it comes to having another hit is not typically developed before a person is 25 years old, Lembke, the author of the book "Dopamine Nation," told jurors.

"Which is why teenagers will often take risks that they shouldn't and not appreciate future consequences," Lembke testified.

"And typically, the gateway drug is the most easily accessible drug," she said, describing Kaley's first use of YouTube at the age of six.

The case is being treated as a bellwether proceeding whose outcome could set the tone for a wave of similar litigation across the United States.

Social media firms face hundreds of lawsuits accusing them of leading young users to become addicted to content and suffer from depression, eating disorders, psychiatric hospitalization, and even suicide.

Lawyers for the plaintiffs are borrowing strategies used in the 1990s and 2000s against the tobacco industry, which faced a similar onslaught of lawsuits arguing that companies knowingly sold a harmful product.


OpenAI Starts Testing Ads in ChatGPT

The OpenAI logo is seen in this illustration taken May 20, 2024. (Reuters)

OpenAI has begun placing ads in the basic versions of its ChatGPT chatbot, a bet that users will not mind the interruptions as the company seeks revenue as its costs soar.

"The test will be for logged-in adult users on the Free and Go subscription tiers" in the United States, OpenAI said Monday. The Go subscription costs $8 in the United States.

Only a small percentage of its nearly one billion users pay for its premium subscription services, which will remain ad-free.

"Ads do not influence the answers ChatGPT gives you, and we keep your conversations with ChatGPT private from advertisers," the company said.

Since ChatGPT's launch in 2022, OpenAI's valuation has soared to $500 billion in funding rounds -- higher than any other private company. Some analysts expect it could go public with a trillion-dollar valuation.

But the ChatGPT maker burns through cash at a furious rate, mostly on the powerful computing required to deliver its services.

Its chief executive Sam Altman had long expressed his dislike for advertising, citing concerns that it could create distrust about ChatGPT's content.

His about-face drew a jab over the weekend from rival Anthropic, which made its advertising debut at the Super Bowl with commercials saying its Claude chatbot would stay ad-free.