OpenAI Forms Safety Committee as it Starts Training Latest Artificial Intelligence Model

FILE - The OpenAI logo is seen displayed on a cell phone with an image on a computer monitor generated by ChatGPT’s Dall-E text-to-image model, Friday, Dec. 8, 2023, in Boston. OpenAI says it’s setting up a new safety and security committee and has begun training a new artificial intelligence model to supplant the GPT-4 system that underpins its ChatGPT chatbot. The San Francisco startup said in a blog post Tuesday May 28, 2024 that the committee will advise the full board on “critical safety and security decisions” for its projects and operations. (AP Photo/Michael Dwyer, File)

OpenAI says it's setting up a safety and security committee and has begun training a new AI model to supplant the GPT-4 system that underpins its ChatGPT chatbot.

The San Francisco startup said in a blog post Tuesday that the committee will advise the full board on “critical safety and security decisions” for its projects and operations.

The safety committee arrives as debate swirls around AI safety at the company, which was thrust into the spotlight after a researcher, Jan Leike, resigned and leveled criticism at OpenAI for letting safety “take a backseat to shiny products.” OpenAI co-founder and chief scientist Ilya Sutskever also resigned, and the company disbanded the “superalignment” team focused on AI risks that they jointly led, The AP reported.

OpenAI said it has “recently begun training its next frontier model” and its AI models lead the industry on capability and safety, though it made no mention of the controversy. “We welcome a robust debate at this important moment,” the company said.

AI models are prediction systems trained on vast datasets to generate on-demand text, images, video and human-like conversation. Frontier models are the most powerful, cutting-edge AI systems.
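To make the prediction-system framing concrete, the sketch below asks a small, freely available language model to continue a sentence one token at a time. It uses the Hugging Face transformers library and the GPT-2 model purely as an illustration; neither appears in the reporting above, and frontier models such as GPT-4 are vastly larger, though they generate text by the same next-token principle.

    # Minimal next-token-prediction sketch (assumes the "transformers"
    # and "torch" packages are installed; GPT-2 is used only because
    # it is small and openly available).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "Artificial intelligence models are trained to"
    inputs = tokenizer(prompt, return_tensors="pt")

    # The model repeatedly predicts a likely next token, which is how
    # chatbots produce text on demand.
    outputs = model.generate(**inputs, max_new_tokens=20)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))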

The safety committee is filled with company insiders, including OpenAI CEO Sam Altman and Chairman Bret Taylor, and four OpenAI technical and policy experts. It also includes board members Adam D’Angelo, who’s the CEO of Quora, and Nicole Seligman, a former Sony general counsel.

The committee's first job will be to evaluate and further develop OpenAI’s processes and safeguards and make its recommendations to the board in 90 days. The company said it will then publicly release the recommendations it's adopting “in a manner that is consistent with safety and security.”



Alphabet to Roll out Image Generation of People on Gemini after Pause

A large Google logo is seen at Google's Bay View campus in Mountain View, California on August 13, 2024. (AFP)

Alphabet's Google said on Wednesday it has updated Gemini's AI image-creation model and would roll out the generation of visuals of people in the coming days, after a months-long pause of the capability.

In February, Google paused its AI tool that creates images of people, following inaccuracies in some historical depictions generated by the model.

Those issues, in which the model returned historically inaccurate images, drew flak from users.

The company said it has worked to improve the product, adhere to its "product principles" and run simulated scenarios to find weaknesses.

The feature will be made available first to paid users of the Gemini AI chatbot, starting in English, and will later roll out to more users and languages.

Google said it has improved its Imagen 3 model to create better images of people, but that it would not generate images of specific people, children or graphic content.

OpenAI's Dall-E, Microsoft's Copilot and, more recently, xAI's Grok are among the other AI chatbots that can now generate images.

The search engine giant also said that over the coming days, subscribers to Gemini Advanced, Business and Enterprise would be able to chat with "Gems," chatbots customized for specific purposes.

Users can write a set of instructions for a particular purpose and save it as a Gem, sparing them from rewriting prompts for repetitive use cases.
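Gems are a feature of the consumer chatbot rather than a developer API, but the underlying idea, a reusable instruction attached to a model so it does not have to be retyped, can be sketched with Google's google-generativeai Python SDK. The model name, API key placeholder and instruction text below are illustrative assumptions, not details from Google's announcement.

    # Sketch of a reusable, purpose-specific instruction, loosely
    # analogous to a Gem (illustrative only; not Google's Gems feature).
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")  # placeholder credential

    # The instruction is written once and reused on every request,
    # instead of being rewritten into each prompt.
    editor = genai.GenerativeModel(
        model_name="gemini-1.5-flash",  # assumed model name
        system_instruction="You are a copy editor. Tighten any text you are given.",
    )

    response = editor.generate_content("Please tighten: the committee will advise the board on decisions that are critical.")
    print(response.text)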