OpenAI's Internal AI Details Stolen in 2023 Breach

FILE PHOTO: AI (Artificial Intelligence) letters and robot miniature in this illustration taken, June 23, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

A hacker gained access to the internal messaging systems at OpenAI last year and stole details about the design of the company's artificial intelligence technologies, the New York Times reported on Thursday.
The hacker lifted details from discussions in an online forum where employees talked about OpenAI's latest technologies, the report said, citing two people familiar with the incident.
However, they did not get into the systems where OpenAI, the firm behind chatbot sensation ChatGPT, houses and builds its AI, the report added.
Microsoft Corp-backed OpenAI did not immediately respond to a Reuters request for comment.
OpenAI executives informed employees at an all-hands meeting in April last year and notified the company's board about the breach, according to the report, but decided not to share the news publicly as no information about customers or partners had been stolen.
OpenAI executives did not consider the incident a national security threat, believing the hacker was a private individual with no known ties to a foreign government, the report said. The San Francisco-based company did not inform federal law enforcement agencies about the breach, it added.
OpenAI in May said it had disrupted five covert influence operations that sought to use its AI models for "deceptive activity" across the internet, the latest to stir safety concerns about the potential misuse of the technology.
The Biden administration was poised to open up a new front in its effort to safeguard US AI technology from China and Russia, with preliminary plans to place guardrails around the most advanced AI models, including ChatGPT, Reuters earlier reported, citing sources.
In May, 16 companies developing AI pledged at a global meeting to develop the technology safely at a time when regulators are scrambling to keep up with rapid innovation and emerging risks.



Google Tests Verified Check Marks in Search Results

A logo of Google is seen on the wall during the groundbreaking ceremony for Malaysia's first Google data center in Kuala Lumpur, Malaysia, 01 October 2024. (EPA)

Alphabet's Google is testing showing check marks next to certain companies on its search results, a company spokesperson said on Friday, in a move aimed at helping users identify verified sources and steer clear of fake websites.

Fraudulent websites impersonating official businesses or services can crop up in online search results, exposing users to false information about a business, deceiving them and potentially harming the brand.

"We regularly experiment with features that help shoppers identify trustworthy businesses online, and we are currently running a small experiment showing checkmarks next to certain businesses on Google," the spokesperson said.

Google already uses automated systems to identify pages with "scammy" or fraudulent content and prevent them from showing up in the search results.

The Verge reported the development earlier on Friday, adding that it spotted blue verified check marks next to official site links for companies including Microsoft, Meta and Apple in search results.

Only some users were able to see the feature, The Verge said, indicating Google has not yet rolled out the test widely.