From Marketing to Design, Brands Adopt AI Tools Despite Risk

This illustration released by Instacart depicts the grocery delivery company's app which can integrate ChatGPT to answer customers' food questions. (Instacart, Inc. via AP)

Even if you haven’t tried artificial intelligence tools that can write essays and poems or conjure new images on command, chances are the companies that make your household products are already starting to do so.

Mattel has put the AI image generator DALL-E to work by having it come up with ideas for new Hot Wheels toy cars. Used vehicle seller CarMax is summarizing thousands of customer reviews with the same “generative” AI technology that powers the popular chatbot ChatGPT.

Meanwhile, Snapchat is bringing a chatbot to its messaging service. And the grocery delivery company Instacart is integrating ChatGPT to answer customers’ food questions, The Associated Press reported.

Coca-Cola plans to use generative AI to help create new marketing content. And while the company hasn’t detailed exactly how it plans to deploy the technology, the move reflects the growing pressure on businesses to harness tools that many of their employees and consumers are already trying on their own.

“We must embrace the risks,” said Coca-Cola CEO James Quincey in a recent video announcing a partnership with startup OpenAI — maker of both DALL-E and ChatGPT — through an alliance led by the consulting firm Bain. “We need to embrace those risks intelligently, experiment, build on those experiments, drive scale, but not taking those risks is a hopeless point of view to start from.”

Indeed, some AI experts warn that businesses should carefully consider potential harms to customers, society and their own reputations before rushing to embrace ChatGPT and similar products in the workplace.

“I want people to think deeply before deploying this technology,” said Claire Leibowicz of The Partnership on AI, a nonprofit group founded and sponsored by the major tech providers that recently released a set of recommendations for companies producing AI-generated synthetic imagery, audio and other media. “They should play around and tinker, but we should also think, what purpose are these tools serving in the first place?”

Some companies have been experimenting with AI for a while. Mattel revealed its use of OpenAI’s image generator in October as a client of Microsoft, whose partnership with OpenAI allows Microsoft to integrate the startup’s technology into its cloud computing platform.

But it wasn’t until the November 30 release of OpenAI’s ChatGPT, a free public tool, that widespread interest in generative AI tools began seeping into workplaces and executive suites.

“ChatGPT really sort of brought it home how powerful they were,” said Eric Boyd, a Microsoft executive who leads its AI platform. “That’s changed the conversation in a lot of people’s minds where they really get it on a deeper level. My kids use it and my parents use it.”

There is reason for caution, however. While text generators like ChatGPT and Microsoft’s Bing chatbot can make the process of writing emails, presentations and marketing pitches faster and easier, they also have a tendency to confidently present misinformation as fact. Image generators trained on a huge trove of digital art and photography have raised copyright concerns from the original creators of those works.

“For companies that are really in the creative industry, if they want to make sure that they have copyright protection for (the outputs of) those models, that’s still an open question,” said attorney Anna Gressel of the law firm Debevoise & Plimpton, which advises businesses on how to use AI.

A safer use has been thinking of the tools as a brainstorming “thought partner” that won’t produce the final product, Gressel said.

“It helps create mock-ups that then are going to be turned by a human into something that is more concrete,” she said.

And that also helps ensure that humans don’t get replaced by AI. Forrester analyst Rowan Curran said the tools should speed up some of the “nitty-gritty” of office tasks — much like previous innovations such as word processors and spell checkers — rather than putting people out of work, as some fear.

“Ultimately it’s part of the workflow,” Curran said. “It’s not like we’re talking about having a large language model just generate an entire marketing campaign and have that launch without expert senior marketers and all kinds of other controls.”

For consumer-facing chatbots getting integrated into smartphone apps, it gets a little trickier, Curran said, with a need for guardrails around technology that can respond to users’ questions in unexpected ways.

Public awareness fueled growing competition between cloud computing providers Microsoft, Amazon and Google, which sell their services to big organizations and have the massive computing power needed to train and operate AI models. Microsoft announced earlier this year it was investing billions more dollars into its partnership with OpenAI, though it also competes with the startup as a direct provider of AI tools.

Google, which pioneered advancements in generative AI but has been cautious about introducing them to the public, is now playing catch-up to capture the technology’s commercial possibilities, including with an upcoming Bard chatbot. Facebook parent Meta, another AI research leader, builds similar technology but doesn’t sell it to businesses in the same way as its big tech peers.

Amazon has taken a more muted tone, but makes its ambitions clear through its partnerships — most recently an expanded collaboration between its cloud computing division AWS and the startup Hugging Face, maker of a ChatGPT rival called Bloom.

Hugging Face decided to double down on its Amazon partnership after seeing the explosion of demand for generative AI products, said Clement Delangue, the startup’s co-founder and CEO. But Delangue contrasted his approach with competitors such as OpenAI, which doesn’t disclose its code and datasets.

Hugging Face hosts a platform that allows developers to share open-source AI models for text, image and audio tools, which can lay the foundation for building different products. That transparency is “really important because that’s the way for regulators, for example, to understand these models and be able to regulate,” he said.

It is also a way for “underrepresented people to understand where the biases can be (and) how the models have been trained,” so that the bias can be mitigated, Delangue said.
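As a rough illustration of what building on those openly shared models can look like, here is a minimal sketch in Python. It assumes the open-source transformers library and uses a small, publicly hosted BLOOM checkpoint purely as an example model; it is not a description of any company’s production setup.

    # Minimal sketch: download an openly shared text-generation model from the
    # Hugging Face Hub and run a prompt through it locally.
    # "bigscience/bloom-560m" is used here only as an illustrative model id.
    from transformers import pipeline

    generator = pipeline("text-generation", model="bigscience/bloom-560m")
    result = generator("Three ideas for a weeknight grocery list:", max_new_tokens=40)
    print(result[0]["generated_text"])

Because such models are published openly, the same few lines also let outside researchers inspect and probe them, the kind of transparency Delangue describes.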



Swiss Interior Minister Open to Social Media Ban for Children

A teenager poses holding a mobile phone displaying a message from TikTok as law banning social media for users under 16 in Australia takes effect, in Sydney, Australia, December 10, 2025. (Reuters)

Switzerland must do more to shield children from social media risks, Interior Minister Elisabeth Baume-Schneider was quoted as saying on Sunday, signaling she was open to a potential ban on the platforms for youngsters.

Following Australia's recent ban on social media for under-16s, Baume-Schneider told SonntagsBlick newspaper that Switzerland should examine similar measures.

"The debate in Australia and the ‌EU is ‌important. It must also ‌be ⁠conducted in Switzerland. ‌I am open to a social media ban," said the minister, a member of the center-left Social Democrats. "We must better protect our children."

She said authorities needed to look at what should be restricted, listing options such as banning social media use by children, curbing harmful content, and addressing algorithms that prey on young people's vulnerabilities.

Detailed discussions will begin in the new year, supported by a report on the issue, Baume-Schneider said, adding: "We mustn't forget social media platforms themselves: they must take responsibility for what children and young people consume."

Australia's ban has won praise from many parents and groups advocating for the welfare of children, and drawn criticism from major technology companies and defenders of free speech.

Earlier this month, the parliament of the Swiss canton of Fribourg voted to prohibit children from using mobile phones at school until they are about 15, the latest step taken at a local level in Switzerland to curb their use in schools.


Google Warns Staff with US Visas against International Travel

FILE PHOTO: The Google logo is displayed during a press conference in Berlin, Germany, November 11, 2025. REUTERS/Lisi Niesner/File Photo

Alphabet's Google has advised some employees on US visas to avoid international travel due to delays at embassies, Business Insider reported on Friday, citing an internal email.

The email, sent by the company's outside counsel BAL Immigration Law on Thursday, warned staff who need a visa stamp to re-enter the United States not to leave the country because visa processing times have lengthened, the report said.

Google did not immediately respond to a Reuters request for comment.

Some US embassies and consulates face visa appointment delays of up to 12 months, the memo said, warning that international travel will "risk an extended stay outside the US", according to the report.

The administration of President Donald Trump this month announced increased vetting of applicants for H-1B visas for highly skilled workers, including screening social media accounts.

The H-1B visa program, widely used by the US technology sector to hire skilled workers from India and China, has been under the spotlight after the Trump administration imposed a $100,000 fee for new applications this year.

In September, Google's parent company Alphabet had strongly advised its employees to avoid international travel and urged H-1B visa holders to remain in the US, according to an email seen by Reuters.


AI Boom Drives Data-Center Dealmaking to Record High, Says Report

AI (Artificial Intelligence) letters and robot hand are placed on computer motherboard in this illustration created on June 23, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

Global data-center dealmaking surged to a record high through November this year, driven by an insatiable demand for computing infrastructure to meet the boom in artificial intelligence usage.

Data from S&P Global Market Intelligence showed that there were more than 100 data center transactions during the period, with the total value sitting just under $61 billion.

WHY IT'S IMPORTANT

Interest in data centers has swelled this year as tech giants and AI hyperscalers have planned billions of dollars in spending to scale up infrastructure.

AI-related companies have powered much of the gains in US stocks this year, but lofty valuations and debt-fueled spending have also sparked worries over how quickly companies can turn those investments into profits.

BY THE NUMBERS

Including M&As, asset ‍sales and equity investments, data center investments hit nearly $61 billion through the end of November, already surpassing 2024's record high $60.81 billion.

Since 2019, data center dealmaking in the US and Canada has totaled about $160 billion, with Asia-Pacific reaching nearly $40 billion and Europe $24.2 billion.

KEY QUOTE

"High interest comes from financial sponsors, which are attracted by the risk/reward profile of such assets. Private equity firms are eager buyers but are generally reluctant sellers, creating an environment where availability for sale of high-quality data center assets is scarce," said Iuri ‌Struta, TMT analyst at S&P Global Market Intelligence.