Intel Says Newest Laptop Chips, Software Will Handle Generative AI

AI (Artificial Intelligence) letters are placed on computer motherboard in this illustration taken June 23, 2023. (Reuters)

Intel said on Tuesday that a new chip due in December will be able to run a generative artificial intelligence chatbot on a laptop rather than having to tap into cloud data centers for computing power.

The capability, which Intel was expected to show during a software developer conference held in Silicon Valley, could let businesses and consumers test ChatGPT-style technologies without sending sensitive data off their own computers. It is made possible by new AI data-crunching features built into Intel's forthcoming "Meteor Lake" laptop chip and by new software tools the company is releasing.

Intel executives are also expected to say that the company is on track to deliver a successor chip called "Arrow Lake" next year, and that Intel's manufacturing technology will rival the best from Taiwan Semiconductor Manufacturing Co, as it has promised. Intel once led the industry in chip manufacturing, lost that lead, and now says it is on track to regain it.

Intel has struggled to gain ground against Nvidia in the market for the powerful chips used in data centers to "train" AI systems such as ChatGPT. Intel said Tuesday that it was building a new supercomputer that would be used by Stability AI, a startup that makes image-generating software.

But the market for chips that will handle AI work outside data centers is far less settled, and it is there that Intel aimed to gain ground on Tuesday.

Through a new version of its OpenVINO software, Intel said developers will be able to run a version of a large language model made by Meta Platforms, the class of technology behind products like ChatGPT, directly on laptops. That will enable faster responses from the chatbot and will mean that data does not leave the device.
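
For illustration only, a minimal sketch of what running a Meta Llama-family model locally through OpenVINO could look like, using the Hugging Face optimum-intel integration; the model identifier, prompt, and generation settings here are assumptions, not Intel's published demo:

```python
# Minimal sketch: local LLM inference via OpenVINO (assumes
# `pip install optimum[openvino] transformers`).
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

# Hypothetical choice of Meta model for illustration.
model_id = "meta-llama/Llama-2-7b-chat-hf"

# export=True converts the Hugging Face weights to OpenVINO's format on the fly,
# so inference runs through the OpenVINO runtime on the local machine.
model = OVModelForCausalLM.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Explain why on-device inference keeps data private."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because inference runs on the local hardware, the prompt and the generated text never leave the machine, which is the privacy argument Intel is making.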

"You can get a better performance, a lower cost and more private AI," Sachin Katti, senior vice president and general manager of Intel's network and edge group, told Reuters in an interview.

Dan Hutcheson, an analyst with TechInsights, told Reuters that business users who are wary of handing sensitive corporate data over to third-party AI firms might be interested in Intel's approach.

"AI is still in that class of technology where you need a PhD to do it," Hutcheson said. Intel Chief Gelsinger's challenge "is to democratize it. If he can pull that off, and make it so that anyone can use it, that creates a much bigger market for chips – the chips that he makes."



OpenAI Finds More Chinese Groups Using ChatGPT for Malicious Purposes

FILE PHOTO: OpenAI logo is seen in this illustration taken February 8, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, which the ChatGPT maker described in a report released Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said, according to Reuters.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as the use of its models to create and debug malware or to generate fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID.

Some content also criticized US President Donald Trump's sweeping tariffs, including X posts such as "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.

OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.