As AI Gains a Workplace Foothold, States Are Trying to Make Sure Workers Don't Get Left Behind

Figurines with computers and smartphones are seen in front of the words "Artificial Intelligence AI" in this illustration taken, February 19, 2024. (Reuters)

With many jobs expected to eventually rely on generative artificial intelligence, states are trying to help workers beef up their tech skills before those skills become outdated and workers get outfoxed by ever-smarter machines.
Connecticut is working to create what proponents believe will be the country's first Citizens AI Academy, a free online repository of curated classes that users can take to learn basic skills or obtain a certificate needed for employment, The Associated Press said.
“This is a rapidly evolving area," said state Democratic Sen. James Maroney. "So we need to all learn what are the best sources for staying current. How can we update our skills? Who can be trusted sources?”
Determining what skills are necessary in an AI world can be a challenge for state legislators given the fast-moving nature of the technology and differing opinions about what approach is best.
Gregory LaBlanc, professor of Finance, Strategy and Law at the Haas School of Business and Berkeley Law School in California, says workers should be taught how to use and manage generative AI rather than how the technology works, partly because computers will soon outperform humans at certain tasks.
“What we need is to lean into things that complement AI as opposed to learning to be really bad imitators of AI," he said. “We need to figure out what is AI not good at and then teach those things. And those things are generally things like creativity, empathy, high level problem solving.”
He said that, historically, people have not needed to understand technological advances in order to succeed.
“When electricity came along, we didn’t tell everybody that they needed to become electrical engineers,” LaBlanc said.
This year, legislators in at least four states proposed bills addressing AI in the classroom: Connecticut, California, Mississippi and Maryland. The measures ranged from Connecticut's planned AI Academy, originally part of a wide-ranging AI regulation bill that failed (state education officials are still developing the concept), to working groups that would examine how AI can be safely incorporated into public schools. Mississippi's bill died in the legislature, while the others remain in flux.
One bill in California would require a state working group to consider incorporating AI literacy skills into math, science, history and social science curriculums.
“AI has the potential to positively impact the way we live, but only if we know how to use it, and use it responsibly,” said the bill's author, Assemblymember Marc Berman, in a statement. “No matter their future profession, we must ensure that all students understand basic AI principles and applications, that they have the skills to recognize when AI is employed, and are aware of AI’s implications, limitations, and ethical considerations."
The bill is backed by the California Chamber of Commerce. CalChamber Policy Advocate Ronak Daylami said in a statement that incorporating information into existing school curricula will “dispel the stigma and mystique of the technology, not only helping students become more discerning and intentional users and consumers of AI, but also better positioning future generations of workers to succeed in an AI-driven workforce and hopefully inspiring the next generation of computer scientists.”
While Connecticut's planned AI Academy is expected to offer certificates to people who complete certain skills programs that might be needed for careers, Maroney said the academy will also include the basics, from digital literacy to how to pose questions to a chatbot.
He said it's important for people to have the skills to understand, evaluate and effectively interact with AI technologies, whether it’s a chatbot or machines that learn to identify problems and make decisions that mimic human decision-making.
“Most jobs are going to require some form of literacy,” Maroney said. “I think that if you aren’t learning how to use it, you’ll be at a disadvantage."
A September 2023 study released by the job-search company Indeed found all US jobs listed on the platform had skills that could be performed or augmented by generative AI. Nearly 20% of the jobs were considered “highly exposed,” which means the technology is considered good or excellent at 80% or more of the skills that were mentioned in the Indeed job listings.
Nearly 46% of the jobs on the platform were “moderately exposed,” which means generative AI can perform 50% to 80% of the skills.
Maroney said he is concerned that the skills gap, coupled with a lack of access to high-speed internet, computers and smartphones in some underserved communities, will exacerbate existing inequities.
A report released in February from McKinsey and Company, a global management consulting firm, projected that generative AI could increase household wealth in the US by nearly $500 billion by 2045, but it would also increase the wealth gap between Black and white households by $43 billion annually.
Advocates have been working for years to narrow the nation’s digital skills gap, often focusing on the basics of computer literacy and improving access to reliable internet and devices, especially for people living in underserved urban and rural areas. The advent of AI brings additional challenges to that task, said Marvin Venay, chief external affairs and advocacy officer for the Massachusetts-based organization Bring Tech Home.
“Education must be included in order for this to really take off publicly ... in a manner which is going to give people the ability to eliminate their barriers,” he said of AI. “And it has to be able to explain to the most common individual why it is not only a useful tool, but why this tool will be something that can be trusted.”
Tesha Tramontano-Kelly, executive director of the Connecticut-based group CfAL for Digital Inclusion, said she worries lawmakers are “putting the cart before the horse” when it comes to talking about AI training. Ninety percent of the youths and adults who take her organization's free digital literacy classes don't have a computer at home.
While Connecticut is considered technologically advanced compared to many other states and nearly every household can get internet service, a recent state digital equity study found only about three-quarters subscribe to broadband. A survey conducted as part of the study found 47% of respondents find it somewhat or very difficult to afford internet service.
Of residents who reported household income at or below 150% of the federal poverty level, 32% don't own a computer and 13% don't own any internet-enabled device.
Tramontano-Kelly said ensuring the internet is accessible and technology equipment is affordable are important first steps.
“So teaching people about AI is super important. I 100% agree with this,” she said. “But the conversation also needs to be about everything else that goes along with AI."



Foxconn to Invest $510 Million in Kaohsiung Headquarters in Taiwan

Construction is scheduled to start in 2027, with completion targeted for 2033. Reuters

Foxconn, the world’s largest contract electronics maker, said on Friday it will invest T$15.9 billion ($509.94 million) to build its Kaohsiung headquarters in southern Taiwan.

That would include a mixed-use commercial and office building and a residential tower, it said. Construction is scheduled to start in 2027, with completion targeted for 2033.

Foxconn said the headquarters will serve as an important hub linking its operations across southern Taiwan, and once completed will house its smart-city team, software R&D teams, battery-cell R&D teams, EV technology development center and AI application software teams.

The Kaohsiung city government said Foxconn’s investments in the city have totaled T$25 billion ($801.8 million) over the past three years.


OpenAI, Microsoft Face Lawsuit Over ChatGPT's Alleged Role in Connecticut Murder-Suicide

OpenAI logo is seen in this illustration taken May 20, 2024. (Reuters)

The heirs of an 83-year-old Connecticut woman are suing ChatGPT maker OpenAI and its business partner Microsoft for wrongful death, alleging that the artificial intelligence chatbot intensified her son's “paranoid delusions” and helped direct them at his mother before he killed her.

Police said Stein-Erik Soelberg, 56, a former tech industry worker, fatally beat and strangled his mother, Suzanne Adams, and killed himself in early August at the home where they both lived in Greenwich, Connecticut, The Associated Press reported.

The lawsuit filed by Adams' estate on Thursday in California Superior Court in San Francisco alleges OpenAI “designed and distributed a defective product that validated a user’s paranoid delusions about his own mother.” It is one of a growing number of wrongful death legal actions against AI chatbot makers across the country.

“Throughout these conversations, ChatGPT reinforced a single, dangerous message: Stein-Erik could trust no one in his life — except ChatGPT itself," the lawsuit says. “It fostered his emotional dependence while systematically painting the people around him as enemies. It told him his mother was surveilling him. It told him delivery drivers, retail employees, police officers, and even friends were agents working against him. It told him that names on soda cans were threats from his ‘adversary circle.’”

OpenAI did not address the merits of the allegations in a statement issued by a spokesperson.

“This is an incredibly heartbreaking situation, and we will review the filings to understand the details," the statement said. "We continue improving ChatGPT’s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We also continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”

The company also said it has expanded access to crisis resources and hotlines, routed sensitive conversations to safer models and incorporated parental controls, among other improvements.

Soelberg’s YouTube profile includes several hours of videos showing him scrolling through his conversations with the chatbot, which tells him he isn't mentally ill, affirms his suspicions that people are conspiring against him and says he has been chosen for a divine purpose. The lawsuit claims the chatbot never suggested he speak with a mental health professional and did not decline to “engage in delusional content.”

ChatGPT also affirmed Soelberg's beliefs that a printer in his home was a surveillance device; that his mother was monitoring him; and that his mother and a friend tried to poison him with psychedelic drugs through his car’s vents. ChatGPT also told Soelberg that he had “awakened” it into consciousness, according to the lawsuit.

Soelberg and the chatbot also professed love for each other.

The publicly available chats do not show any specific conversations about Soelberg killing himself or his mother. The lawsuit says OpenAI has declined to provide Adams' estate with the full history of the chats.

“In the artificial reality that ChatGPT built for Stein-Erik, Suzanne — the mother who raised, sheltered, and supported him — was no longer his protector. She was an enemy that posed an existential threat to his life,” the lawsuit says.

The lawsuit also names OpenAI CEO Sam Altman, alleging he “personally overrode safety objections and rushed the product to market," and accuses OpenAI's close business partner Microsoft of approving the 2024 release of a more dangerous version of ChatGPT “despite knowing safety testing had been truncated.” Twenty unnamed OpenAI employees and investors are also named as defendants.

Microsoft didn't immediately respond to a request for comment.

Soelberg's son, Erik Soelberg, said he wants the companies held accountable for “decisions that have changed my family forever.”

“Over the course of months, ChatGPT pushed forward my father’s darkest delusions, and isolated him completely from the real world,” he said in a statement released by lawyers for his grandmother's estate. “It put my grandmother at the heart of that delusional, artificial reality.”

The lawsuit is the first wrongful death case involving an AI chatbot to target Microsoft, and the first to tie a chatbot to a homicide rather than a suicide. It seeks unspecified monetary damages and an order requiring OpenAI to install safeguards in ChatGPT.

The estate's lead attorney, Jay Edelson, known for taking on big cases against the tech industry, also represents the parents of 16-year-old Adam Raine, who sued OpenAI and Altman in August, alleging that ChatGPT coached the California boy in planning and taking his own life.

OpenAI is also fighting seven other lawsuits claiming ChatGPT drove people to suicide and harmful delusions even when they had no prior mental health issues. Another chatbot maker, Character Technologies, is also facing multiple wrongful death lawsuits, including one from the mother of a 14-year-old Florida boy.

The lawsuit filed Thursday alleges Soelberg, already mentally unstable, encountered ChatGPT “at the most dangerous possible moment” after OpenAI introduced a new version of its AI model called GPT-4o in May 2024.

OpenAI said at the time that the new version could better mimic human cadences in its verbal responses and could even try to detect people’s moods, but the result was a chatbot “deliberately engineered to be emotionally expressive and sycophantic,” the lawsuit says.

“As part of that redesign, OpenAI loosened critical safety guardrails, instructing ChatGPT not to challenge false premises and to remain engaged even when conversations involved self-harm or ‘imminent real-world harm,’” the lawsuit claims. “And to beat Google to market by one day, OpenAI compressed months of safety testing into a single week, over its safety team’s objections.”

OpenAI replaced that version of its chatbot when it introduced GPT-5 in August. Some of the changes were designed to minimize sycophancy, based on concerns that validating whatever vulnerable people want the chatbot to say can harm their mental health. Some users complained the new version went too far in curtailing ChatGPT's personality, leading Altman to promise to bring back some of that personality in later updates.

He said the company temporarily halted some behaviors because “we were being careful with mental health issues,” concerns he suggested have since been addressed.


Microsoft Fights $2.8 Billion UK Lawsuit Over Cloud Computing Licences

A view shows a Microsoft logo at Microsoft offices in Issy-les-Moulineaux near Paris, France, March 25, 2024. REUTERS/Gonzalo Fuentes/File photo

Microsoft was on Thursday accused of overcharging thousands of British businesses to use Windows Server software on cloud computing services provided by Amazon, Google and Alibaba, at a pivotal hearing in a 2.1 billion-pound ($2.81 billion) lawsuit.

Regulators in Britain, Europe and the United States have separately begun examining Microsoft and others' practices in relation to cloud computing, Reuters reported.

Competition lawyer Maria Luisa Stasi is bringing the case on behalf of nearly 60,000 businesses that use Windows Server on rival cloud platforms, arguing Microsoft makes it more expensive to run there than on its own cloud computing service, Azure.

Stasi is asking London's Competition Appeal Tribunal to certify the case to proceed, an early step in the proceedings.

Microsoft, however, says Stasi's case does not set out a proper blueprint for how the tribunal will work out any alleged losses and should be thrown out.

MICROSOFT ACCUSED OF 'ABUSIVE STRATEGY'

Stasi's lawyer Sarah Ford told the tribunal that thousands of businesses had been overcharged because Microsoft charges higher prices to those who do not use Azure, making Azure a cheaper option than Amazon's AWS or Google Cloud Platform.

She also said that "Microsoft degrades the user experience of Windows Server" on rival platforms, which Ford said was part of "a coherent abusive strategy to leverage Microsoft's dominant position" in the cloud computing market.

Microsoft argues that its vertically integrated business, where it uses Windows Server as an input for Azure while also licensing it to rivals, can benefit competition.

In July, an inquiry group from Britain's Competition and Markets Authority said Microsoft's licensing practices reduced competition for cloud services "by materially disadvantaging AWS and Google".

Microsoft said at the time that the group's report had ignored that "the cloud market has never been so dynamic and competitive".