Microsoft to Let Clients Build AI Agents for Routine Tasks from November

General view of Microsoft Corporation headquarters at Issy-les-Moulineaux, near Paris, France, April 18, 2016. REUTERS/Charles Platiau/File Photo

Microsoft will allow its customers to build autonomous artificial intelligence agents from next month, in its latest push to tap the booming technology amid growing investor scrutiny of its hefty AI investments.
The company is positioning autonomous agents, programs that, unlike chatbots, require little human intervention, as "apps for an AI-driven world" that can handle client queries, identify sales leads and manage inventory, Reuters reported.
Other big technology companies such as Salesforce have also touted the potential of such agents, tools that some analysts say could provide companies with an easier path to monetizing the billions of dollars they are pouring into AI.
Microsoft said its customers can use Copilot Studio - an application that requires little knowledge of computer code - to create such agents in public preview from November. It is using several AI models developed in-house and by OpenAI for the agents.
The company is also introducing 10 ready-to-use agents that can help with routine tasks ranging from supply chain management to expense tracking and client communications.
In a demo, McKinsey & Co, which had early access to the tools, created an agent that can manage client inquiries by checking interaction history, identifying the consultant for the task and scheduling a follow-up meeting.
"The idea is that Copilot (the company's chatbot) is the user interface for AI," Charles Lamanna, corporate vice president of business and industry Copilot at Microsoft, told Reuters.
"Every employee will have a Copilot, their personalized AI agent, and then they will use that Copilot to interface and interact with the sea of AI agents that will be out there."
Tech giants are facing pressure to show returns on their big AI investments. Microsoft's shares fell 2.8% in the September quarter, underperforming the S&P 500, but remain more than 10% higher for the year.
Some concerns have risen in recent months about the pace of Copilot adoption, with research firm Gartner saying in August its survey of 152 IT organizations showed the vast majority had not progressed their Copilot initiatives past the pilot stage.



OpenAI Finds More Chinese Groups Using ChatGPT for Malicious Purposes

FILE PHOTO: OpenAI logo is seen in this illustration taken February 8, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, the ChatGPT maker said in a report released Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said, according to Reuters.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as the creation and debugging of malware, or the generation of fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID.

Some of the content also criticized US President Donald Trump's sweeping tariffs, with generated X posts such as: "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.

OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.