AI's Relentless Rise Gives Journalists Tough Choices

The arrival of ChatGPT sent shockwaves through the journalism industry. Kirill KUDRYAVTSEV / AFP

The rise of artificial intelligence has forced an increasing number of journalists to grapple with the ethical and editorial challenges posed by the rapidly expanding technology.
AI's role in assisting newsrooms or transforming them completely was among the questions raised at the International Journalism Festival in the Italian city of Perugia that closes on Sunday.
What will happen to jobs?
AI tools imitating human intelligence are widely used in newsrooms around the world to transcribe sound files, summarize texts and translate.
In early 2023, Germany's Axel Springer group announced it was cutting jobs at the Bild and Die Welt newspapers, saying AI could now "replace" some of its journalists.
Generative AI -- capable of producing text and images following a simple request in everyday language -- has been opening new frontiers as well as raising concerns for a year and a half, AFP reported.
One issue is that voices and faces can now be cloned to produce a podcast or present news on television. Meanwhile, Filipino website Rappler last year created a brand aimed at young audiences by converting its long articles into comics, graphics and even videos.
Media professionals agree that their trade must now focus on tasks offering the greatest "added value".
"You're the one who is doing the real stuff" and "the tools that we produce will be an assistant to you," Google News general manager Shailesh Prakash told the festival in Perugia.
All about the money
The costs of generative AI have plummeted since ChatGPT burst onto the scene in late 2022, with the tool designed by US start-up OpenAI now accessible to smaller newsrooms.
Colombian investigative outlet Cuestion Publica has enlisted engineers to develop a tool that can delve into its archives and find relevant background information in the event of breaking news.
But many media organizations are not developing their own language models, which are at the core of AI interfaces, said University of Amsterdam professor Natali Helberger, who stressed that such models are needed for "safe and trustworthy technology".
The disinformation threat
According to one estimate last year by Everypixel Journal, AI has created as many images in one year as photography in 150 years.
That has raised serious questions about how news can be fished out of the tidal wave of content, including deepfakes.
Media and tech organizations are teaming up to tackle the threat, notably through the Coalition for Content Provenance and Authenticity, which seeks to set common standards.
"The core of our job is news gathering, on-the-ground reporting," said Sophie Huet, recently appointed to become global news director for editorial innovation and artificial intelligence at Agence France-Presse.
"We'll rely for a while on human reporters," she added, although that might be with the help of artificial intelligence.
From Wild West to regulation
Press freedom watchdog Reporters Without Borders, which has expanded its brief to defending trustworthy news, launched the Paris Charter on AI and Journalism late last year.
"One of the things I really liked about the Paris Charter was the emphasis on transparency," said Anya Schiffrin, a lecturer on global media, innovation and human rights at Columbia University in the United States.
"To what extent will publishers have to disclose when they are using generative IA?"
Olle Zachrison, head of AI and news strategy at public broadcaster Swedish Radio, said there was "a serious debate going on: should you mark out AI content or should people trust your brand?"
Regulation remains in its infancy in the face of a constantly evolving technology.
In March, the European Parliament adopted a framework law aiming to regulate AI models without holding back innovation, while guidelines and charters are increasingly common in newsrooms.
AI editorial guidelines are updated every three months at India's Quintillion Media, said its boss Ritu Kapur.
None of the organization's articles may be written by AI, and AI-generated images cannot depict real life.
Resist or collaborate?
AI models feed off data, but their thirst for the vital commodity has raised hackles among providers.
In December, the New York Times sued OpenAI and its main investor Microsoft for violation of copyright.
In contrast, other media organizations have struck deals with OpenAI: Axel Springer, US news agency AP, French daily Le Monde and Spanish group Prisa Media whose titles include El Pais and AS newspapers.
With resources tight in the media industry, collaborating with the new technology is tempting, explained Emily Bell, a professor at Columbia University's journalism school.
She senses a growing external pressure to "Get on board, don't miss the train".



AI No Better Than Other Methods for Patients Seeking Medical Advice, Study Shows

AI (Artificial Intelligence) letters and a robot hand are placed on a computer motherboard in this illustration created on June 23, 2023. (Reuters)

Asking AI about medical symptoms does not help patients make better decisions about their health than other methods, such as a standard internet search, according to a new study published in Nature Medicine.

The authors said the study was important as people were increasingly turning to AI and chatbots for advice on their health, but without evidence that this was necessarily the best and safest approach.

Researchers led by the University of Oxford’s Internet Institute worked alongside a group of doctors to draw up 10 medical scenarios, ranging from a common cold to life-threatening bleeding on the brain.

When tested without human participants, three large language models (OpenAI's GPT-4o, Meta's Llama 3 and Cohere's Command R+) identified the conditions in 94.9% of cases and chose the correct course of action, such as calling an ambulance or going to the doctor, in an average of 56.3% of cases. The companies did not respond to requests for comment.

'HUGE GAP' BETWEEN AI'S POTENTIAL AND ACTUAL PERFORMANCE

The researchers then recruited 1,298 participants in Britain to investigate the symptoms and decide their next step, using either AI or their usual resources, such as an internet search, their own experience or the National Health Service website.

When the participants did this, relevant conditions were identified in less than 34.5% of cases, and the right course of action was chosen in less than 44.2%, no better than the control group using more traditional tools.

Adam Mahdi, co-author of the paper and associate professor at Oxford, said the study showed the “huge gap” between the potential of AI and the pitfalls when it was used by people.

“The knowledge may be in those bots; however, this knowledge doesn’t always translate when interacting with humans,” he said, adding that more work was needed to identify why this was happening.

HUMANS OFTEN GIVING INCOMPLETE INFORMATION

The team studied around 30 of the interactions in detail, and concluded that often humans were providing incomplete or wrong information, but the LLMs were also sometimes generating misleading or incorrect responses.

For example, one participant reporting the symptoms of a subarachnoid hemorrhage (a life-threatening condition causing bleeding on the brain) was correctly told by AI to go to hospital after describing a stiff neck, light sensitivity and the "worst headache ever". Another described the same symptoms but called the headache merely "terrible", and was told to lie down in a darkened room.

The team now plans to run similar studies in different countries and languages, and over time, to test whether those factors affect AI’s performance.

The study was supported by the data company Prolific, the German non-profit Dieter Schwarz Stiftung, and the UK and US governments.


Meta Criticizes EU Antitrust Move Against WhatsApp Block on AI Rivals

This illustration photograph taken on December 1, 2025, shows the logo of WhatsApp displayed on a smartphone's screen, in Frankfurt am Main, western Germany. (Photo by Kirill KUDRYAVTSEV / AFP)

Meta Platforms on Monday criticized EU regulators after they charged the US tech giant with breaching antitrust rules and threatened to halt its block on AI rivals on its messaging service WhatsApp.

"The facts are that there is no reason for ⁠the EU to intervene in the WhatsApp Business API. There are many AI options and people can use them from app stores, operating systems, devices, websites, and ⁠industry partnerships," a Meta spokesperson said in an email.

"The Commission's logic incorrectly assumes the WhatsApp Business API is a key distribution channel for these chatbots."


Chinese Robot Makers Ready for Lunar New Year Entertainment Spotlight

A folk performer breathes fire during a performance ahead of Lunar New Year celebrations in a village in Huai'an, in China's eastern Jiangsu Province on February 7, 2026. (AFP)

In China, humanoid robots are serving as Lunar New Year entertainment, with their manufacturers pitching their song-and-dance skills to the general public as well as potential customers, investors and government officials.

On Sunday, Shanghai-based robotics start-up Agibot live-streamed an almost hour-long variety show featuring its robots dancing, performing acrobatics and magic tricks, lip-syncing ballads and acting in comedy sketches. Other Agibot humanoid robots waved from an audience section.

An estimated 1.4 million people watched on the Chinese streaming platform Douyin. Agibot, which called the promotional stunt "the world's first robot-powered gala," did not have an immediate estimate for total viewership.

The show ran a week ahead of China's annual Spring Festival gala, to be aired by state television, an event that has become an important, if unlikely, venue for Chinese robot makers to show off their success.

A squad of 16 full-size humanoids from Unitree joined human dancers in performing at China Central Television's 2025 gala, drawing stunned accolades from millions of viewers.

Less than three weeks later, Unitree's founder was invited to a high-profile symposium chaired by Chinese President Xi Jinping. The Hangzhou-based robotics firm has since been preparing for a potential initial public offering.

This year's CCTV gala will include participation by four humanoid robot startups: Unitree, Galbot, Noetix and MagicLab, the companies and the broadcaster have said.

Agibot's gala employed over 200 robots. It was streamed on the social media platforms RedNote, Sina Weibo, TikTok and its Chinese version Douyin. Chinese-language television networks HTTV and iCiTi TV also broadcast the performance.

"When robots begin to understand Lunar New Year and begin to have a sense of humor, the human-computer interaction may come faster than we think," Ma Hongyun, a photographer and writer with 4.8 million followers on Weibo, said in a post.

Agibot, which says its humanoid robots are designed for a range of applications, including in education, entertainment and factories, plans to launch an initial public offering in Hong Kong, Reuters has reported.

State-run Securities Times said Agibot had opted out of the CCTV gala in order to focus spending on research and development. The company did not respond to a request for comment.

The company demonstrated two of its robots to Xi during a visit in April last year.

US billionaire Elon Musk, who has pivoted automaker Tesla toward a focus on artificial intelligence and the Optimus humanoid robot, has said the only competitive threat he faces in robotics is from Chinese firms.