Onsemi Aims to Improve AI Power Efficiency with Silicon Carbide Chips

Onsemi on Wednesday unveiled a lineup of chips designed to make the data centers that power artificial intelligence services more energy efficient by borrowing a technology it already sells for electric vehicles.

Onsemi is one of a handful of suppliers of chips made of silicon carbide, an alternative to standard silicon that is pricier to manufacture but more efficient at converting power from one form to another. In recent years, silicon carbide has found wide use in electric vehicles, where swapping in the chips between a vehicle's battery and motors can extend its driving range, Reuters reported.

Simon Keeton, president of the power solutions group at Onsemi, said that in a typical data center, electricity gets converted at least four times between when it enters the building and when it is ultimately used by a chip to do work. Over the course of those conversions, about 12% of the electricity is lost as heat, Keeton said.
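A quick back-of-the-envelope sketch of what that 12% figure implies (the per-stage efficiency is our own inference, not a figure Keeton gave): if four conversion stages together lose about 12% of incoming power, each stage averages roughly 96.9% efficiency.

```python
# Inferred illustration: 12% cumulative loss across four conversion stages
# implies an average per-stage efficiency of (1 - 0.12) ** (1/4).
total_retained = 1 - 0.12            # 88% of power survives all four conversions
per_stage = total_retained ** (1 / 4)
print(f"average efficiency per conversion stage: {per_stage:.3f}")  # ≈ 0.969
```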

"The companies that are actually using these things - the Amazons and the Googles and the Microsofts - they get double penalized for these losses," Keeton said. "Number one, they're paying for the electricity that gets lost as heat. And then because it gets lost as heat, they're paying for the electricity to then cool" the data center.

Onsemi believes it can reduce those power losses by a full percentage point. While a percentage point may not sound like much, estimates of how much power AI data centers will consume are staggering, with some groups projecting up to 1,000 terawatt-hours in less than two years.

One percent of that total, Keeton said, "is enough to power a million houses for a year. So that puts it into context of how to think about the power levels."
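The arithmetic behind that comparison can be checked directly. This sketch assumes an average US household uses roughly 10,000 kWh per year (an assumed figure, close to published US averages; not stated in the article):

```python
# Back-of-the-envelope check of the quoted figures.
projected_twh = 1000                      # projected annual AI data-center demand (TWh)
savings_twh = projected_twh * 0.01        # a one-percentage-point efficiency gain
kwh_per_home = 10_000                     # assumed annual household consumption (kWh)
homes = savings_twh * 1e9 / kwh_per_home  # 1 TWh = 1e9 kWh
print(f"{homes:,.0f} homes powered for a year")  # 1,000,000
```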



OpenAI, Anthropic Sign Deals with US Govt for AI Research and Testing

OpenAI logo is seen in this illustration taken May 20, 2024. (Reuters)

AI startups OpenAI and Anthropic have signed deals with the United States government for research, testing and evaluation of their artificial intelligence models, the US Artificial Intelligence Safety Institute said on Thursday.

The first-of-their-kind agreements come at a time when the companies are facing regulatory scrutiny over safe and ethical use of AI technologies.

California legislators are set to vote on a bill as soon as this week to broadly regulate how AI is developed and deployed in the state.

Under the deals, the US AI Safety Institute will have access to major new models from both OpenAI and Anthropic prior to and following their public release.

The agreements will also enable collaborative research to evaluate capabilities of the AI models and risks associated with them, Reuters reported.

"We believe the institute has a critical role to play in defining US leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on," said Jason Kwon, chief strategy officer at ChatGPT maker OpenAI.

Anthropic, which is backed by Amazon and Alphabet, did not immediately respond to a Reuters request for comment.

"These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI," said Elizabeth Kelly, director of the US AI Safety Institute.

The institute, part of the US Commerce Department's National Institute of Standards and Technology (NIST), will also collaborate with the UK AI Safety Institute and provide feedback to the companies on potential safety improvements.

The US AI Safety Institute was launched last year as part of an executive order by President Joe Biden's administration to evaluate known and emerging risks of artificial intelligence models.