As AI Gains a Workplace Foothold, States are Trying to Make Sure Workers Don't Get Left Behind

Figurines with computers and smartphones are seen in front of the words "Artificial Intelligence AI" in this illustration taken, February 19, 2024. (Reuters)

With many jobs expected to eventually rely on generative artificial intelligence, states are trying to help workers upgrade their tech skills before those skills become outdated and workers are outpaced by increasingly capable machines.
Connecticut is working to create what proponents believe will be the country's first Citizens AI Academy, a free online repository of curated classes that users can take to learn basic skills or obtain a certificate needed for employment, The Associated Press said.
“This is a rapidly evolving area," said state Democratic Sen. James Maroney. "So we need to all learn what are the best sources for staying current. How can we update our skills? Who can be trusted sources?”
Determining what skills are necessary in an AI world can be a challenge for state legislators given the fast-moving nature of the technology and differing opinions about what approach is best.
Gregory LaBlanc, a professor of finance, strategy and law at the Haas School of Business at the University of California, Berkeley, says workers should be taught how to use and manage generative AI rather than how the technology works, partly because computers will soon be better able to perform certain tasks previously performed by humans.
“What we need is to lean into things that complement AI as opposed to learning to be really bad imitators of AI," he said. “We need to figure out what is AI not good at and then teach those things. And those things are generally things like creativity, empathy, high level problem solving.”
He said that, historically, people have not needed to understand new technologies in order to succeed.
“When electricity came along, we didn’t tell everybody that they needed to become electrical engineers,” LaBlanc said.
This year, at least four states — Connecticut, California, Mississippi and Maryland — proposed legislation to address AI in the classroom. The proposals ranged from Connecticut's planned AI Academy, which was originally part of a wide-ranging AI regulation bill that failed (state education officials are still developing the concept), to working groups that would examine how AI can be safely incorporated into public schools. The Mississippi bill died in the legislature, while the others remain in flux.
One bill in California would require a state working group to consider incorporating AI literacy skills into math, science, history and social science curriculums.
“AI has the potential to positively impact the way we live, but only if we know how to use it, and use it responsibly,” said the bill's author, Assemblymember Marc Berman, in a statement. “No matter their future profession, we must ensure that all students understand basic AI principles and applications, that they have the skills to recognize when AI is employed, and are aware of AI’s implications, limitations, and ethical considerations."
The bill is backed by the California Chamber of Commerce. CalChamber Policy Advocate Ronak Daylami said in a statement that incorporating such material into existing school curricula will “dispel the stigma and mystique of the technology, not only helping students become more discerning and intentional users and consumers of AI, but also better positioning future generations of workers to succeed in an AI-driven workforce and hopefully inspiring the next generation of computer scientists.”
While Connecticut's planned AI Academy is expected to offer certificates to people who complete certain skills programs that might be needed for careers, Maroney said the academy will also include the basics, from digital literacy to how to pose questions to a chatbot.
He said it's important for people to have the skills to understand, evaluate and effectively interact with AI technologies, whether it’s a chatbot or machines that learn to identify problems and make decisions that mimic human decision-making.
“Most jobs are going to require some form of literacy,” Maroney said. “I think that if you aren’t learning how to use it, you’ll be at a disadvantage."
A September 2023 study released by the job-search company Indeed found that all US jobs listed on the platform involved skills that could be performed or augmented by generative AI. Nearly 20% of the jobs were considered “highly exposed,” meaning the technology is considered good or excellent at 80% or more of the skills mentioned in the Indeed job listings.
Nearly 46% of the jobs on the platform were “moderately exposed,” meaning generative AI can perform 50% to 80% of the skills.
Maroney said he is concerned that the skills gap, coupled with a lack of access to high-speed internet, computers and smartphones in some underserved communities, will exacerbate existing inequities.
A report released in February from McKinsey and Company, a global management consulting firm, projected that generative AI could increase household wealth in the US by nearly $500 billion by 2045, but it would also increase the wealth gap between Black and white households by $43 billion annually.
Advocates have been working for years to narrow the nation’s digital skills gap, often focusing on the basics of computer literacy and improving access to reliable internet and devices, especially for people living in underserved urban and rural areas. The advent of AI brings additional challenges to that task, said Marvin Venay, chief external affairs and advocacy officer for the Massachusetts-based organization Bring Tech Home.
“Education must be included in order for this to really take off publicly ... in a manner which is going to give people the ability to eliminate their barriers,” he said of AI. “And it has to be able to explain to the most common individual why it is not only a useful tool, but why this tool will be something that can be trusted.”
Tesha Tramontano-Kelly, executive director of the Connecticut-based group CfAL for Digital Inclusion, said she worries lawmakers are “putting the cart before the horse” when it comes to talking about AI training. Ninety percent of the youths and adults who use her organization's free digital literacy classes don't have a computer in the home.
While Connecticut is considered technologically advanced compared to many other states and nearly every household can get internet service, a recent state digital equity study found that only about three-quarters of households subscribe to broadband. A survey conducted as part of the study found that 47% of respondents consider it somewhat or very difficult to afford internet service.
Of residents who reported household income at or below 150% of the federal poverty level, 32% don't own a computer and 13% don't own any internet-enabled device.
Tramontano-Kelly said ensuring the internet is accessible and technology equipment is affordable are important first steps.
“So teaching people about AI is super important. I 100% agree with this,” she said. “But the conversation also needs to be about everything else that goes along with AI."



Nvidia, Joining Big Tech Deal Spree, to License Groq Technology, Hire Executives

The Nvidia logo is seen on a graphic card package in this illustration created on August 19, 2025. (Reuters)

Nvidia has agreed to license chip technology from startup Groq and hire away its CEO, a veteran of Alphabet's Google, Groq said in a blog post on Wednesday.

The deal follows a familiar pattern in recent years where the world's biggest technology firms pay large sums in deals with promising startups to take their technology and talent but stop short of formally acquiring the target.

Groq specializes in what is known as inference, where artificial intelligence models that have already been trained respond to requests from users. While Nvidia dominates the market for training AI models, it faces much more competition in inference, where traditional rivals such as Advanced Micro Devices, as well as startups such as Groq and Cerebras Systems, have aimed to challenge it.

Nvidia has agreed to a "non-exclusive" license to Groq's technology, Groq said. It said its founder Jonathan Ross, who helped Google start its AI chip program, as well as Groq President Sunny Madra and other members of its engineering team, will join Nvidia.

A person close to Nvidia confirmed the licensing agreement.

Groq did not disclose financial details of the deal. CNBC reported that Nvidia had agreed to acquire Groq for $20 billion in cash, but neither Nvidia nor Groq commented on the report. Groq said in its blog post that it will continue to operate as an independent company with Simon Edwards as CEO and that its cloud business will continue operating.

In similar recent deals, Microsoft's top AI executive came to the company through a $650 million deal with a startup that was billed as a licensing fee, and Meta spent $15 billion to hire Scale AI's CEO without acquiring the entire firm. Amazon hired away founders from Adept AI, and Nvidia did a similar deal this year. The deals have faced scrutiny from regulators, though none has yet been unwound.

"Antitrust would seem to be the primary risk here, though structuring the deal as a non-exclusive license may keep the fiction of competition alive (even as Groq’s leadership and, we would presume, technical talent move over to Nvidia)," Bernstein analyst Stacy Rasgon wrote in a note to clients on Wednesday after Groq's announcement. And Nvidia CEO Jensen Huang's "relationship with ⁠the Trump administration appears among the strongest of the key US tech companies."

Groq more than doubled its valuation to $6.9 billion, up from $2.8 billion in August last year, following a $750 million funding round in September.

Groq is one of a number of upstarts that do not use external high-bandwidth memory chips, freeing them from the memory crunch affecting the global chip industry. The approach, which uses a form of on-chip memory called SRAM, helps speed up interactions with chatbots and other AI models but also limits the size of the model that can be served.
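
The tradeoff described here can be illustrated with a rough back-of-envelope sketch. The capacities and byte counts below are assumed round numbers for illustration only, not Groq's or Nvidia's actual specifications; the point is simply why keeping model weights in on-chip SRAM speeds up serving but caps the size of the model a single device can hold.

```python
# Illustrative arithmetic: on-chip SRAM vs. external HBM capacity for holding
# an AI model's weights. All figures are assumptions for illustration only.
MB = 1024**2
GB = 1024**3

sram_per_chip = 200 * MB      # assumed on-chip SRAM per accelerator
hbm_per_gpu = 80 * GB         # assumed external HBM on a training-class GPU
bytes_per_param = 1           # assume 8-bit (1-byte) quantized weights

def max_params(memory_bytes: int) -> float:
    """Rough upper bound on how many parameters fit in the given memory."""
    return memory_bytes / bytes_per_param

print(f"Params fitting on one SRAM-only chip:  ~{max_params(sram_per_chip) / 1e9:.2f} billion")
print(f"Params fitting on one HBM-equipped GPU: ~{max_params(hbm_per_gpu) / 1e9:.0f} billion")

# Serving a larger model on SRAM-only hardware means sharding weights across
# many chips: fast (no off-chip memory trips) but capacity-constrained.
model_params = 70e9           # hypothetical 70-billion-parameter model
chips_needed = model_params * bytes_per_param / sram_per_chip
print(f"SRAM-only chips needed for a 70B-parameter model: ~{chips_needed:.0f}")
```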

Groq's primary rival in the approach is Cerebras Systems, which, Reuters reported this month, plans to go public as soon as next year. Both Groq and Cerebras have signed large deals in the Middle East.

Nvidia's Huang spent much of his biggest keynote speech of 2025 arguing that Nvidia would be able to maintain its lead as AI markets shift from training to inference.


Italy Watchdog Orders Meta to Halt WhatsApp Terms Barring Rival AI Chatbots

The logo of Meta is seen at Porte de Versailles exhibition center in Paris, France, June 11, 2025. (Reuters)

Italy's antitrust authority (AGCM) on Wednesday ordered Meta Platforms to suspend contractual terms that could shut rival AI chatbots out of WhatsApp, as it investigates the US tech group for suspected abuse of a dominant position.

A spokesperson for Meta called the decision "fundamentally flawed," and said the emergence of AI chatbots "put a strain on our systems that they were not designed to support".

"We ‌will ⁠appeal," ​the ‌spokesperson added.

The move is the latest in a string of actions by European regulators against Big Tech firms, as the EU seeks to balance support for the sector with efforts to curb its expanding influence.

Meta's conduct appeared capable of restricting "output, market access or technical development in the AI chatbot services market", potentially harming consumers, AGCM said.

In July, the Italian regulator opened the investigation into Meta over the suspected abuse of a dominant position related to WhatsApp. It widened the probe in November to cover updated terms for the messaging app's business platform.

"These contractual conditions completely exclude Meta AI's competitors in the AI chatbot services ⁠market from the WhatsApp platform," the watchdog said.

EU antitrust regulators launched a parallel investigation into Meta last month over the same allegations.

Europe's tough stance, a marked contrast to more lenient US regulation, has sparked industry pushback, particularly from US tech titans, and drawn criticism from the administration of US President Donald Trump.

The Italian watchdog said it was coordinating with the European Commission to ensure Meta's conduct was addressed "in the most effective manner".


Amazon Says Blocked 1,800 North Koreans from Applying for Jobs

Amazon logo (Reuters)

US tech giant Amazon said it has blocked over 1,800 North Koreans from joining the company, as Pyongyang sends large numbers of IT workers overseas to earn and launder funds.

In a post on LinkedIn, Amazon's Chief Security Officer Stephen Schmidt said last week that North Korean workers had been "attempting to secure remote IT jobs with companies worldwide, particularly in the US".

He said the firm had seen nearly a one-third rise in applications by North Koreans in the past year, AFP reported.

The North Koreans typically use "laptop farms": computers in the United States that are operated remotely from outside the country, he said.

He warned the problem wasn't specific to Amazon and "is likely happening at scale across the industry".

Tell-tale signs of North Korean workers, Schmidt said, included wrongly formatted phone numbers and dodgy academic credentials.

In July, a woman in Arizona was sentenced to more than eight years in prison for running a laptop farm helping North Korean IT workers secure remote jobs at more than 300 US companies.

The scheme generated more than $17 million in revenue for her and North Korea, officials said.

Last year, Seoul's intelligence agency warned that North Korean operatives had used LinkedIn to pose as recruiters and approach South Koreans working at defense firms to obtain information on their technologies.

"North Korea is actively training cyber personnel and infiltrating key locations worldwide," Hong Min, an analyst at the Korea Institute for National Unification, told AFP.

"Given Amazon's business nature, the motive seems largely economic, with a high likelihood that the operation was planned to steal financial assets," he added.

North Korea's cyber-warfare program dates back to at least the mid-1990s.

It has since grown into a 6,000-strong cyber unit known as Bureau 121, which operates from several countries, according to a 2020 US military report.

In November, Washington announced sanctions on eight individuals accused of being "state-sponsored hackers", whose illicit operations were conducted "to fund the regime's nuclear weapons program" by stealing and laundering money.

The US Department of the Treasury has accused North Korea-affiliated cybercriminals of stealing over $3 billion over the past three years, primarily in cryptocurrency.