OpenAI Forms Safety Committee as It Starts Training Latest Artificial Intelligence Model

FILE - The OpenAI logo is seen displayed on a cell phone with an image on a computer monitor generated by ChatGPT’s Dall-E text-to-image model, Friday, Dec. 8, 2023, in Boston. OpenAI says it’s setting up a new safety and security committee and has begun training a new artificial intelligence model to supplant the GPT-4 system that underpins its ChatGPT chatbot. The San Francisco startup said in a blog post Tuesday, May 28, 2024, that the committee will advise the full board on “critical safety and security decisions” for its projects and operations. (AP Photo/Michael Dwyer, File)

OpenAI says it's setting up a safety and security committee and has begun training a new AI model to supplant the GPT-4 system that underpins its ChatGPT chatbot.

The San Francisco startup said in a blog post Tuesday that the committee will advise the full board on “critical safety and security decisions” for its projects and operations.

The safety committee arrives as debate swirls around AI safety at the company, which was thrust into the spotlight after a researcher, Jan Leike, resigned and leveled criticism at OpenAI for letting safety “take a backseat to shiny products.” OpenAI co-founder and chief scientist Ilya Sutskever also resigned, and the company disbanded the “superalignment” team focused on AI risks that the two jointly led, The AP reported.

OpenAI said it has “recently begun training its next frontier model” and its AI models lead the industry on capability and safety, though it made no mention of the controversy. “We welcome a robust debate at this important moment,” the company said.

AI models are prediction systems that are trained on vast datasets to generate on-demand text, images, video and human-like conversation. Frontier models are the most powerful, cutting-edge AI systems.

The safety committee is filled with company insiders, including OpenAI CEO Sam Altman and Chairman Bret Taylor, and four OpenAI technical and policy experts. It also includes board members Adam D’Angelo, who’s the CEO of Quora, and Nicole Seligman, a former Sony general counsel.

The committee's first job will be to evaluate and further develop OpenAI’s processes and safeguards and make its recommendations to the board in 90 days. The company said it will then publicly release the recommendations it's adopting “in a manner that is consistent with safety and security.”



Nations Building Their Own AI Models Add to Nvidia's Growing Chip Demand

FILE PHOTO: AI (Artificial Intelligence) letters and robot hand miniature in this illustration, taken June 23, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

Nations building artificial intelligence models in their own languages are turning to Nvidia's chips, adding to already booming demand as generative AI takes center stage for businesses and governments, a senior executive said on Wednesday.
Nvidia's third-quarter forecast for rising sales of its chips that power AI technology such as OpenAI's ChatGPT failed to meet investors' towering expectations. But the company described new customers coming from around the world, including governments that are now seeking their own AI models and the hardware to support them, Reuters said.
Countries adopting their own AI applications and models will contribute revenue in the low double-digit billions of dollars in the financial year ending in January 2025, Chief Financial Officer Colette Kress said on a call with analysts after Nvidia's earnings report.
That is up from an earlier forecast that such sales would contribute high single-digit billions of dollars to total revenue. Nvidia forecast about $32.5 billion in total revenue for the third quarter ending in October.
"Countries around the world (desire) to have their own generative AI that would be able to incorporate their own language, incorporate their own culture, incorporate their own data in that country," Kress said, describing AI expertise and infrastructure as "national imperatives."
She offered the example of Japan's National Institute of Advanced Industrial Science and Technology, which is building an AI supercomputer featuring thousands of Nvidia H200 graphics processors.
Governments are also turning to AI to strengthen national security.
"AI models are trained on data, and for political entities, particularly nations, their data are secret and their models need to be customized to their unique political, economic, cultural, and scientific needs," said IDC computing semiconductors analyst Shane Rau.
"Therefore, they need to have their own AI models and a custom underlying arrangement of hardware and software."
Washington tightened its controls on exports of cutting-edge chips to China in 2023 as it sought to prevent breakthroughs in AI that would aid China's military, hampering Nvidia's sales in the region.
Businesses have been working to tap into government pushes to build AI platforms in regional languages.
IBM said in May that Saudi Arabia's Data and Artificial Intelligence Authority would train its "ALLaM" Arabic language model using the company's AI platform Watsonx.
Nations that want to create their own AI models can drive growth opportunities for Nvidia's GPUs, on top of the significant investments in the company's hardware from large cloud providers like Microsoft, said Bob O'Donnell, chief analyst at TECHnalysis Research.