From Marketing to Design, Brands Adopt AI Tools despite Risk

This illustration released by Instacart depicts the grocery delivery company's app which can integrate ChatGPT to answer customers' food questions. (Instacart, Inc. via AP)

Even if you haven’t tried artificial intelligence tools that can write essays and poems or conjure new images on command, chances are the companies that make your household products are already starting to do so.

Mattel has put the AI image generator DALL-E to work by having it come up with ideas for new Hot Wheels toy cars. Used vehicle seller CarMax is summarizing thousands of customer reviews with the same “generative” AI technology that powers the popular chatbot ChatGPT.

Meanwhile, Snapchat is bringing a chatbot to its messaging service. And the grocery delivery company Instacart is integrating ChatGPT to answer customers’ food questions, The Associated Press said.

Coca-Cola plans to use generative AI to help create new marketing content. And while the company hasn’t detailed exactly how it plans to deploy the technology, the move reflects the growing pressure on businesses to harness tools that many of their employees and consumers are already trying on their own.

“We must embrace the risks,” said Coca-Cola CEO James Quincey in a recent video announcing a partnership with startup OpenAI — maker of both DALL-E and ChatGPT — through an alliance led by the consulting firm Bain. “We need to embrace those risks intelligently, experiment, build on those experiments, drive scale, but not taking those risks is a hopeless point of view to start from.”

Indeed, some AI experts warn that businesses should carefully consider potential harms to customers, society and their own reputations before rushing to embrace ChatGPT and similar products in the workplace.

“I want people to think deeply before deploying this technology,” said Claire Leibowicz of The Partnership on AI, a nonprofit group founded and sponsored by the major tech providers that recently released a set of recommendations for companies producing AI-generated synthetic imagery, audio and other media. “They should play around and tinker, but we should also think, what purpose are these tools serving in the first place?”

Some companies have been experimenting with AI for a while. Mattel revealed its use of OpenAI’s image generator in October as a client of Microsoft, which has a partnership with OpenAI that allows Microsoft to integrate OpenAI’s technology into its cloud computing platform.

But it wasn’t until the November 30 release of OpenAI’s ChatGPT, a free public tool, that widespread interest in generative AI tools began seeping into workplaces and executive suites.

“ChatGPT really sort of brought it home how powerful they were,” said Eric Boyd, a Microsoft executive who leads its AI platform. “That’s changed the conversation in a lot of people’s minds where they really get it on a deeper level. My kids use it and my parents use it.”

There is reason for caution, however. While text generators like ChatGPT and Microsoft’s Bing chatbot can make the process of writing emails, presentations and marketing pitches faster and easier, they also have a tendency to confidently present misinformation as fact. Image generators trained on a huge trove of digital art and photography have raised copyright concerns from the original creators of those works.

“For companies that are really in the creative industry, if they want to make sure that they have copyright protection for (the outputs of) those models, that’s still an open question,” said attorney Anna Gressel of the law firm Debevoise & Plimpton, which advises businesses on how to use AI.

A safer use has been thinking of the tools as a brainstorming “thought partner” that won’t produce the final product, Gressel said.

“It helps create mock-ups that then are going to be turned by a human into something that is more concrete,” she said.

And that also helps ensure that humans don’t get replaced by AI. Forrester analyst Rowan Curran said the tools should speed up some of the “nitty-gritty” of office tasks — much like previous innovations such as word processors and spell checkers — rather than putting people out of work, as some fear.

“Ultimately it’s part of the workflow,” Curran said. “It’s not like we’re talking about having a large language model just generate an entire marketing campaign and have that launch without expert senior marketers and all kinds of other controls.”

For consumer-facing chatbots getting integrated into smartphone apps, it gets a little trickier, Curran said, with a need for guardrails around technology that can respond to users’ questions in unexpected ways.

Public awareness fueled growing competition between cloud computing providers Microsoft, Amazon and Google, which sell their services to big organizations and have the massive computing power needed to train and operate AI models. Microsoft announced earlier this year it was investing billions more dollars into its partnership with OpenAI, though it also competes with the startup as a direct provider of AI tools.

Google, which pioneered advancements in generative AI but has been cautious about introducing them to the public, is now playing catch-up to capture the technology’s commercial possibilities, including with an upcoming Bard chatbot. Facebook parent Meta, another AI research leader, builds similar technology but doesn’t sell it to businesses in the same way as its big tech peers.

Amazon has taken a more muted tone, but makes its ambitions clear through its partnerships — most recently an expanded collaboration between its cloud computing division AWS and the startup Hugging Face, maker of a ChatGPT rival called Bloom.

Hugging Face decided to double down on its Amazon partnership after seeing the explosion of demand for generative AI products, said Clement Delangue, the startup’s co-founder and CEO. But Delangue contrasted his approach with competitors such as OpenAI, which doesn’t disclose its code and datasets.

Hugging Face hosts a platform that allows developers to share open-source AI models for text, image and audio tools, which can lay the foundation for building different products. That transparency is “really important because that’s the way for regulators, for example, to understand these models and be able to regulate,” he said.

It is also a way for “underrepresented people to understand where the biases can be (and) how the models have been trained,” so that the bias can be mitigated, Delangue said.



Poland Urges Brussels to Probe TikTok Over AI-Generated Content

The TikTok logo is pictured outside the company's US head office in Culver City, California, US, September 15, 2020. (Reuters)

Poland has asked the European Commission to investigate TikTok after the social media platform hosted AI-generated content, including calls for Poland to withdraw from the EU, the Polish government said on Tuesday, adding that the content was almost certainly Russian disinformation.

"The disclosed content poses a threat to public order, information security, and the integrity of democratic processes in Poland and across the European Union," Deputy Digitalization Minister Dariusz Standerski said in a letter sent to the Commission.

"The nature of the narratives, the manner in which they are distributed, and the use of synthetic audiovisual materials indicate that the platform is failing to comply with the obligations imposed on it as a Very Large Online Platform (VLOP)," he added.

A Polish government spokesperson said on Tuesday the content was undoubtedly Russian disinformation as the recordings contained Russian syntax.

TikTok, representatives of the Commission and of the Russian embassy in Warsaw did not immediately respond to Reuters' requests for comment.

EU countries are taking measures to head off any foreign state attempts to influence elections and local politics after warnings of Russian-sponsored espionage and sabotage. Russia has repeatedly denied interfering in foreign elections.

Last year, the Commission opened formal proceedings against social media firm TikTok, owned by China's ByteDance, over its suspected failure to limit election interference, notably in the Romanian presidential vote in November 2024.

Poland called on the Commission to initiate proceedings in connection with suspected breaches of the bloc's sweeping Digital Services Act, which regulates how the world's biggest social media companies operate in Europe.

Under the Act, large internet platforms like X, Facebook, TikTok and others must moderate and remove harmful content like hate speech, racism or xenophobia. If they do not, the Commission can impose fines of up to 6% of their worldwide annual turnover.


Saudi National Cybersecurity Authority Launches Service to Verify Suspicious Links


The National Cybersecurity Authority has launched the “Tahqaq” service, which enables members of the public to deal safely with circulated links by instantly verifying their reliability before visiting them.

This initiative comes within the authority’s strategic programs designed to empower individuals to enhance their cybersecurity, SPA reported.

The authority noted that the “Tahqaq” service allows users to scan circulated links and helps reduce the risks associated with using and visiting suspicious links that may lead to unauthorized access to data. The service also provides cybersecurity guidance to users, mitigating emerging cyber risks and boosting cybersecurity awareness across all segments of society.

The “Tahqaq” service is offered as part of the National Portal for Cybersecurity Services (Haseen) in partnership with the authority’s technical arm, the Saudi Information Technology Company (SITE). The service is available through the unified number on WhatsApp (+966118136644), as well as via the Haseen portal website at tahqaq.haseen.gov.sa.


Saudi Arabia’s Space Sector: A Strategic Pillar of a Knowledge-Based Economy

The Kingdom is developing an integrated sovereign space system encompassing infrastructure and applications, led by national expertise - SPA

Saudi Arabia is undergoing significant transformations toward an innovation-driven knowledge economy, with the space sector emerging as a crucial pillar of Saudi Vision 2030. This sector has evolved from a scientific domain into a strategic driver for economic development, focusing on investing in talent, developing infrastructure, and strengthening international partnerships.

Saudi Space Agency CEO Dr. Mohammed Al-Tamimi emphasized that space is a vital tool for human development. He noted that space exploration has yielded significant benefits in telecommunications, navigation, and Earth observation, with many daily technologies stemming from space research, SPA reported.

Dr. Al-Tamimi highlighted a notable shift with the private sector's entry into the space industry, which is generating new opportunities. He stressed that Saudi Arabia aims not just to participate but to lead in creating an integrated space ecosystem encompassing legislation, investment, and innovation.

He also noted the sector's role in fostering national identity among young people, whom he described as key drivers of the industry, adding that investing in them is crucial to the Kingdom's goal of building a space sector that empowers Saudi citizens.

In alignment with international efforts, the Saudi Space Agency signed an agreement with NASA for the first Saudi satellite dedicated to studying space weather, part of the Artemis II mission under a scientific cooperation framework established in July 2024.

According to SPA, the Kingdom is developing an integrated sovereign space system encompassing infrastructure and applications, led by national expertise. This initiative is supported by strategic investments and advanced technologies within a governance framework that meets international standards. Central to this vision is the Neo Space Group, owned by the Public Investment Fund, which aims to establish Saudi Arabia as a space leader.

Saudi Arabia views space as a strategic frontier for human development. Vision 2030 transforms space into a bridge between dreams and achievements, empowering Saudi youth to shape their futures. Space represents not just data and satellites but a national journey connecting ambition with innovation.