Google Combats ISIS through Limiting Related Search Results


The spokesman for the global coalition to defeat ISIS announced that Twitter links taking users to pro-ISIS websites dropped by 92 percent in May.

He told al-Hurra television on Friday that the coalition had set up a Twitter hashtag aimed at showing users how to report pro-ISIS content to officials.

The hashtag explains that users can help defeat ISIS with the click of a button: if a user sees ISIS content, clicking the associated hashtag icon brings up a guide on how to counter it.

The same method can be applied to Facebook.

Last week, Google unveiled new technology aimed at reducing user access to terrorist videos posted on the internet. The technology is the result of efforts by social media giants Twitter, Facebook and YouTube, which is owned by Google.

A YouTube spokesman explained that when people search for a specific video on YouTube, they type one or more keywords into the search box and the desired video usually appears in the results. With the new technology, however, a short video about terrorism and terrorists appears in the results to warn the user against visiting a certain website or viewing a certain video.

He said that a redirect video appears when a user searches for terrorist or terror-linked videos.
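
As an illustration of how such keyword-triggered redirection could work in principle, the sketch below prepends a counter-narrative video to the result list when a query matches a watchlist. The keyword list, video IDs and function names are hypothetical assumptions for illustration, not YouTube's actual implementation.

```python
# Illustrative sketch only: the watchlist, video IDs and function names are
# hypothetical assumptions, not YouTube's actual system.

REDIRECT_KEYWORDS = {"isis recruitment", "join isis"}  # assumed watchlist
REDIRECT_VIDEO = {"id": "counter_narrative_001",
                  "title": "Redirect: former members speak out"}

def build_results(query: str, organic_results: list[dict]) -> list[dict]:
    """Prepend a counter-narrative video when the query matches the watchlist."""
    normalized = query.lower().strip()
    if any(keyword in normalized for keyword in REDIRECT_KEYWORDS):
        return [REDIRECT_VIDEO] + organic_results
    return organic_results

# Example: a flagged query surfaces the redirect video ahead of organic results.
print(build_results("ISIS recruitment video", [{"id": "abc123", "title": "news clip"}]))
```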

This technology was used to combat racism on YouTube, he revealed. This same method could also be used in the war against terrorism.

In March, several major US companies withdrew their ads from YouTube in protest against its allowing racist, sexual and unethical videos to be posted on its platform. These companies included telecommunications giant Verizon and medical product heavyweight Johnson & Johnson.

The new technology was, however, criticized by free speech organizations. Jeff Chester, executive director of the Washington-based Center for Digital Democracy, said it was clear that ad companies are exploiting the war against violence, discrimination and terrorism to influence content on social media, undermining the credibility and neutrality of these sites.

Google, for its part, noted that searching for the principles of Islam could lead users to sites connected to hate groups that promote violence instead of tolerance. The company therefore became more careful about the results it displays for Islam-related queries, in order to prevent the spread of false information about the religion and limit misinterpretations of its teachings, including those on jihad and Sharia.

Many users, especially in the West, associate Islam with terrorist crimes, even though Muslims in the West are often themselves the victims of racist and hate crimes.

Google relied on mathematical algorithms to assess search results and determine whether they are offensive to religion; if so, the search engine prevents the offending websites from appearing in the results. In the past, users searching for keywords such as “jihad” and “Sharia” often came across numerous websites filled with incorrect information about the religion.
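
A minimal sketch of score-based filtering of this kind is shown below, assuming a classifier that scores each result and a cutoff above which results are withheld. The scoring stand-in, threshold and field names are assumptions for illustration, not Google's actual algorithm.

```python
# Hypothetical sketch: the scoring stand-in, threshold and field names are
# assumptions for illustration, not Google's actual algorithm.

OFFENSIVE_THRESHOLD = 0.8  # assumed cutoff above which a result is withheld

def offensiveness_score(result: dict) -> float:
    """Crude stand-in for a learned classifier that scores a result's content."""
    flagged_terms = {"hate", "violence"}  # assumed term list
    text = (result.get("title", "") + " " + result.get("snippet", "")).lower()
    hits = sum(term in text for term in flagged_terms)
    return hits / len(flagged_terms)

def filter_results(results: list[dict]) -> list[dict]:
    """Drop results whose score meets or exceeds the threshold before display."""
    return [r for r in results if offensiveness_score(r) < OFFENSIVE_THRESHOLD]
```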

In addition, Google altered its autofill service. In the past, when a user typed “does Islam…” into the search box, the autofill technology would have completed the query with “… permit terrorism?” The autofill suggestion “do Muslim women … need saving?” is likewise a thing of the past with the new Google technology.
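
Suppression of this kind can be pictured as filtering a candidate suggestion list against a blocklist before it is shown, as in the sketch below. The blocklist entries and matching logic are assumptions for illustration, not Google's autofill system.

```python
# Illustrative sketch: the blocklist and matching logic are assumptions,
# not Google's autofill system.

SUPPRESSED_COMPLETIONS = {
    "does islam permit terrorism",
    "do muslim women need saving",
}

def autocomplete(prefix: str, candidates: list[str]) -> list[str]:
    """Return candidate completions for a prefix, dropping suppressed ones."""
    prefix = prefix.lower().strip()
    return [
        c for c in candidates
        if c.lower().startswith(prefix) and c.lower() not in SUPPRESSED_COMPLETIONS
    ]

# Example: the suppressed completion is filtered out of the suggestions.
print(autocomplete("does islam", ["does islam permit terrorism", "does islam have holidays"]))
```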

Google had adopted this same approach in combating discrimination against Christianity and Judaism.

Last week, YouTube implemented a new method of countering terrorism by redirecting users who search for violent and extremist content towards anti-ISIS videos. The approach aims to counter terrorist thinking before it takes hold: instead of ISIS videos, users are shown videos of former ISIS members recounting their ordeal inside the terrorist group, and are redirected to discussions by religious figures speaking against extremist ideology.



OpenAI Finds More Chinese Groups Using ChatGPT for Malicious Purposes

FILE PHOTO: OpenAI logo is seen in this illustration taken February 8, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, which the ChatGPT maker described in a report released Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said, according to Reuters.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as creating and debugging malware, or generating fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID.

Some content also criticized US President Donald Trump's sweeping tariffs, with generated X posts such as: "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.

OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.