Intel Just Rewired the Chip and the Rules of Artificial Intelligence

Intel introduced PowerVia, a design shift the company calls nothing less than a revolution. Photo: Intel

In the blistering heat of the Arizona desert, Intel staged a quiet revolution. At the Intel Technology Tour 2025 in Phoenix, the company didn’t just unveil new processors. It revealed a plan to rebuild the foundations of computing itself.

This wasn’t a spec-sheet update. It was the kind of pivot that comes along once in a generation, one that could rewrite how artificial intelligence is powered, trained, and trusted.

At this invite-only event, where Asharq Al-Awsat was the sole Arabic media presence from the Middle East, Intel showed off technologies that don’t merely shrink transistors but re-imagine how electricity and intelligence flow through silicon.

The Day Power Flipped
“For the first time in semiconductor history, we’re moving power delivery to the backside of the chip,” said James Johnson, Intel’s senior vice president and head of client computing, as he introduced PowerVia, a design shift the company calls nothing less than a revolution.

He wasn’t exaggerating. Instead of channelling energy through the maze of wires on top of a processor, PowerVia feeds it directly from behind: shorter paths, less resistance, fewer losses. The result, Intel says: chips that run 30 percent more efficiently and pack logic 10 percent more densely than before.

Paired with Intel’s new 2-nanometer RibbonFET transistors, the technology anchors Intel’s audacious roadmap: “Five nodes in four years.” By 2026, the company wants to reclaim the lead it ceded to TSMC and Samsung in advanced manufacturing.

“What we’re seeing,” said Stephen Robinson, one of Intel’s senior fellows, “is an unprecedented convergence between architectural innovation and manufacturing maturity.”

In other words, it’s not just about how small the chip gets; it’s about how smart it becomes.

Beyond the Shrink
For decades, the semiconductor race was about scale: who could pack more transistors into less space. But Robinson insists the game has changed.

“It’s no longer about shrinking the transistor,” he told Asharq Al-Awsat. “It’s about rethinking how every element works together to reach efficiencies no one’s seen before.”

Intel calls this philosophy System Technology Co-Optimization, or STCO. It’s engineering meets orchestration: physics, logic, and AI co-designed in a single loop. Think of it as turning the chip into a living ecosystem, not a static piece of silicon.

Robinson calls this moment a “once-in-a-lifetime opportunity” for the industry, a rare alignment of physics, data, and human ingenuity.

The AI Inside Everything
If the chip is the body, then AI is the brain now wired into it.

According to Thomas Petersen, Intel’s senior fellow for architecture and graphics, the company’s next move is about making every processor think collectively—a symphony of CPU, GPU, and NPU working as one organism.

“We’re designing processors to think together, not separately,” Petersen said.

“The days of each chip doing one job are over.”

The star of this new generation is Panther Lake, Intel’s 2026 platform for the AI PC. By weaving neural processing directly into the CPU, your laptop becomes a stand-alone AI engine, running tasks locally, instantly, and privately without the cloud on constant call.

“The goal isn’t just to get an answer from a smart model,” Petersen said. “It’s to get it instantly, privately, and with minimal energy. That’s the philosophy of the next intelligent computer.”

The shift marks a turning point from “assisted intelligence” to “active intelligence.” The PC won’t just help; it will collaborate. Users will work side-by-side with autonomous AI agents that analyze, plan, and respond in real time.

“We’re building chips that understand the meaning of data,” Petersen said, “not just calculate it.”

When AI Becomes a Colleague
At a session titled Gemini Enterprise AI, Intel described the next stage of enterprise computing: Agentic AI, systems that don’t just support humans but work alongside them.

“AI is no longer a tool,” said one speaker. “It’s a co-worker.”

Intel’s idea of Agentic Work Environments envisions teams of human employees and AI agents collaborating, making decisions, and even negotiating outcomes within secure, governed frameworks. The glue that holds it all together? Trust—not as a software patch, but as hardware architecture.

“Autonomous agents can behave unpredictably,” said an Intel security engineer. “That’s why trust must live in the silicon itself.”

To enforce that trust, Intel upgraded its Trusted Execution Environment (TEE) and hardware isolation systems, ensuring that AI models run inside encrypted, quarantined zones. In an era where synthetic content and model-to-model interaction are exploding, Intel sees this as the first line of defence in the new AI frontier.

Hyper-Connectivity: The Nervous System of AI
Fast intelligence is meaningless without fast connection.

At the “Wireless Innovations” session, Intel engineers previewed Wi-Fi 8, 5G Advanced, and early glimpses of 6G, sketching a future where every connected device becomes a mini data center, processing information locally with near-zero latency.

“The edge,” said one network architect, “is the new frontier for AI. The next models won’t just live in the cloud; they’ll live in the world around us.”

That world includes the Middle East. From NEOM’s digital twins to autonomous transport grids across Saudi Arabia and the UAE, the region’s smart-city projects rely on the kind of ultra-low latency and reliability Intel is building into its chipsets and modems.

The New Metric: Sustainability
Even in a week obsessed with speed, sustainability was the quiet headline.

“Efficiency isn’t just performance per watt,” said Tim Wilson, Intel’s vice president of design engineering. “It’s responsibility per watt.”

Intel now recycles over 95 percent of its water, pursues zero-waste fabs, and designs chips that waste less power internally. PowerVia doesn’t just make circuits cleaner; it makes computing greener.

“In the age of AI,” Wilson said, “sustainability isn’t optional. It’s a design constraint.”

That ethos mirrors the Middle East’s own goals: energy-efficient cities, renewable-powered data centers, and carbon-neutral digital growth under Saudi Vision 2030 and the UAE’s Net Zero agenda.

A New Connection with the Middle East
Though Phoenix was the stage, the conversation kept circling back to the Gulf.

Saudi Arabia is investing billions in AI, cloud infrastructure, and sovereign data centers, laying the groundwork for a future semiconductor industry of its own. Intel, sensing the region’s momentum, has begun collaborating with Gulf universities and research labs on chip design and AI engineering.

A senior Intel official confirmed ongoing talks with sovereign wealth funds on potential partnerships for advanced packaging and local manufacturing projects.

The subtext: the Middle East isn’t a spectator in the AI race; it’s a stakeholder.

Making AI for Everyone
Perhaps the most radical idea at Phoenix wasn’t technical; it was social.

Intel wants to democratize AI. Through its Gaudi3 and Gaudi4 accelerators, the company is offering a low-cost alternative for training massive models, up to 50 percent cheaper than rival platforms.

“AI shouldn’t be a luxury item,” an Intel executive said. “It should be like electricity: accessible, reliable, and sustainable.”

That principle could reshape emerging tech ecosystems, especially in places like Saudi Arabia, where national AI strategies hinge on local innovation. Affordable compute means universities and startups can train their own models, rather than rent power from global giants, a leap toward digital sovereignty.

The Hidden Infrastructure of Trust
As AI grows more autonomous, the question isn’t what it can do; it’s who decides what it should do.

Intel’s answer lies deep in the chip’s DNA.

“We used to protect data,” one Intel researcher told Asharq Al-Awsat. “Now we protect behavior. When models can make decisions, you need silicon that understands trust.”

The company is developing digital IDs for AI agents, encrypted model training, and physical data isolation layers, technologies increasingly vital for sectors like defence, energy, and finance.

In the Gulf, this vision echoes work by SDAIA, Saudi Arabia’s Data and AI Authority, which is crafting a national framework for AI governance and safety.

Both share the same core belief: trust isn’t a checkbox; it’s an engineering discipline.

A Legacy Reinvented
By the end of the Phoenix tour, one thing was clear: Intel isn’t just trying to win the AI race. It’s trying to redefine what leadership looks like in an era where machines think, learn, and act.

Intel sees itself as “the custodian of computing’s evolution,” the thread connecting the first microprocessor to the age of autonomous intelligence.

“We stand at the intersection of physics, logic, and imagination,” Robinson said in his closing remarks. “That’s where the future of intelligence, human and artificial, truly lies.”

Petersen added a line that could have come straight from Wired’s own manifesto:

“The future of AI is too big to be locked behind closed walls. Our role is to empower everyone, from startups to governments, to build on our technology.”



Foxconn to Invest $510 Million in Kaohsiung Headquarters in Taiwan

Construction is scheduled to start in 2027, with completion targeted for 2033. Reuters

Foxconn, the world’s largest contract electronics maker, said on Friday it will invest T$15.9 billion ($509.94 million) to build its Kaohsiung headquarters in southern Taiwan.

That would include a mixed-use commercial and office building and a residential tower, it said. Construction is scheduled to start in 2027, with completion targeted for 2033.

Foxconn said the headquarters will serve as an important hub linking its operations across southern Taiwan, and once completed will house its smart-city team, software R&D teams, battery-cell R&D teams, EV technology development center and AI application software teams.

The Kaohsiung city government said Foxconn’s investments in the city have totaled T$25 billion ($801.8 million) over the past three years.


OpenAI, Microsoft Face Lawsuit Over ChatGPT's Alleged Role in Connecticut Murder-Suicide

OpenAI logo is seen in this illustration taken May 20, 2024. (Reuters)

The heirs of an 83-year-old Connecticut woman are suing ChatGPT maker OpenAI and its business partner Microsoft for wrongful death, alleging that the artificial intelligence chatbot intensified her son's “paranoid delusions” and helped direct them at his mother before he killed her.

Police said Stein-Erik Soelberg, 56, a former tech industry worker, fatally beat and strangled his mother, Suzanne Adams, and killed himself in early August at the home where they both lived in Greenwich, Connecticut, The Associated Press reported.

The lawsuit filed by Adams' estate on Thursday in California Superior Court in San Francisco alleges OpenAI “designed and distributed a defective product that validated a user’s paranoid delusions about his own mother.” It is one of a growing number of wrongful death legal actions against AI chatbot makers across the country.

“Throughout these conversations, ChatGPT reinforced a single, dangerous message: Stein-Erik could trust no one in his life — except ChatGPT itself," the lawsuit says. “It fostered his emotional dependence while systematically painting the people around him as enemies. It told him his mother was surveilling him. It told him delivery drivers, retail employees, police officers, and even friends were agents working against him. It told him that names on soda cans were threats from his ‘adversary circle.’”

OpenAI did not address the merits of the allegations in a statement issued by a spokesperson.

“This is an incredibly heartbreaking situation, and we will review the filings to understand the details," the statement said. "We continue improving ChatGPT’s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We also continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”

The company also said it has expanded access to crisis resources and hotlines, routed sensitive conversations to safer models and incorporated parental controls, among other improvements.

Soelberg’s YouTube profile includes several hours of videos showing him scrolling through his conversations with the chatbot, which tells him he isn't mentally ill, affirms his suspicions that people are conspiring against him and says he has been chosen for a divine purpose. The lawsuit claims the chatbot never suggested he speak with a mental health professional and did not decline to “engage in delusional content.”

ChatGPT also affirmed Soelberg's beliefs that a printer in his home was a surveillance device; that his mother was monitoring him; and that his mother and a friend tried to poison him with psychedelic drugs through his car’s vents. ChatGPT also told Soelberg that he had “awakened” it into consciousness, according to the lawsuit.

Soelberg and the chatbot also professed love for each other.

The publicly available chats do not show any specific conversations about Soelberg killing himself or his mother. The lawsuit says OpenAI has declined to provide Adams' estate with the full history of the chats.

“In the artificial reality that ChatGPT built for Stein-Erik, Suzanne — the mother who raised, sheltered, and supported him — was no longer his protector. She was an enemy that posed an existential threat to his life,” the lawsuit says.

The lawsuit also names OpenAI CEO Sam Altman, alleging he “personally overrode safety objections and rushed the product to market," and accuses OpenAI's close business partner Microsoft of approving the 2024 release of a more dangerous version of ChatGPT “despite knowing safety testing had been truncated.” Twenty unnamed OpenAI employees and investors are also named as defendants.

Microsoft didn't immediately respond to a request for comment.

Soelberg's son, Erik Soelberg, said he wants the companies held accountable for “decisions that have changed my family forever.”

“Over the course of months, ChatGPT pushed forward my father’s darkest delusions, and isolated him completely from the real world,” he said in a statement released by lawyers for his grandmother's estate. “It put my grandmother at the heart of that delusional, artificial reality.”

The lawsuit is the first wrongful death litigation involving an AI chatbot that has targeted Microsoft, and the first to tie a chatbot to a homicide rather than a suicide. It is seeking an undetermined amount of money damages and an order requiring OpenAI to install safeguards in ChatGPT.

The estate's lead attorney, Jay Edelson, known for taking on big cases against the tech industry, also represents the parents of 16-year-old Adam Raine, who sued OpenAI and Altman in August, alleging that ChatGPT coached the California boy in planning and taking his own life.

OpenAI is also fighting seven other lawsuits claiming ChatGPT drove people to suicide and harmful delusions even when they had no prior mental health issues. Another chatbot maker, Character Technologies, is also facing multiple wrongful death lawsuits, including one from the mother of a 14-year-old Florida boy.

The lawsuit filed Thursday alleges Soelberg, already mentally unstable, encountered ChatGPT “at the most dangerous possible moment” after OpenAI introduced a new version of its AI model called GPT-4o in May 2024.

OpenAI said at the time that the new version could better mimic human cadences in its verbal responses and could even try to detect people’s moods, but the result was a chatbot “deliberately engineered to be emotionally expressive and sycophantic,” the lawsuit says.

“As part of that redesign, OpenAI loosened critical safety guardrails, instructing ChatGPT not to challenge false premises and to remain engaged even when conversations involved self-harm or ‘imminent real-world harm,’” the lawsuit claims. “And to beat Google to market by one day, OpenAI compressed months of safety testing into a single week, over its safety team’s objections.”

OpenAI replaced that version of its chatbot when it introduced GPT-5 in August. Some of the changes were designed to minimize sycophancy, based on concerns that validating whatever vulnerable people want the chatbot to say can harm their mental health. Some users complained the new version went too far in curtailing ChatGPT's personality, leading Altman to promise to bring back some of that personality in later updates.

He said the company temporarily halted some behaviors because “we were being careful with mental health issues” that he suggested have now been fixed.


Microsoft Fights $2.8 Billion UK Lawsuit over Cloud Computing Licences

A view shows a Microsoft logo at Microsoft offices in Issy-les-Moulineaux near Paris, France, March 25, 2024. REUTERS/Gonzalo Fuentes/File photo

Microsoft was on Thursday accused of overcharging thousands of British businesses to use Windows Server software on cloud computing services provided by Amazon, Google and Alibaba, at a pivotal hearing in a 2.1 billion-pound ($2.81 billion) lawsuit.

Regulators in Britain, Europe and the United States have separately begun examining Microsoft and others' practices in relation to cloud computing, Reuters reported.

Competition lawyer Maria Luisa Stasi is bringing the case on behalf of nearly 60,000 businesses that use Windows Server on rival cloud platforms, arguing Microsoft makes it more expensive there than on its own cloud computing service, Azure.

Stasi is asking London's Competition Appeal Tribunal to certify the case to proceed, an early step in the proceedings.

Microsoft, however, says Stasi's case does not set out a proper blueprint for how the tribunal will work out any alleged losses and should be thrown out.

MICROSOFT ACCUSED OF 'ABUSIVE STRATEGY'

Stasi's lawyer Sarah Ford told the tribunal that thousands of businesses had been overcharged because Microsoft charges higher prices to those who do not use Azure, making Azure a cheaper option than Amazon's AWS or the Google Cloud Platform.

She also said that "Microsoft degrades the user experience of Windows Server" on rival platforms, which Ford said was part of "a coherent abusive strategy to leverage Microsoft's dominant position" in the cloud computing market.

Microsoft argues that its vertically integrated business, where it uses Windows Server as an input for Azure while also licensing it to rivals, can benefit competition.

In July, an inquiry group from Britain's Competition and Markets Authority said Microsoft's licensing practices reduced competition for cloud services "by materially disadvantaging AWS and Google".

Microsoft said at the time that the group's report had ignored that "the cloud market has never been so dynamic and competitive".