Scientists Train AI Model to Predict Future Illnesses

AI (Artificial Intelligence) letters and robot hand are placed on computer motherboard in this illustration created on June 23, 2023. (Reuters)

Scientists said Wednesday that they had created an AI model able to predict medical diagnoses years in advance, building on the same technology behind consumer chatbots like ChatGPT.

Based on a patient's case history, the Delphi-2M AI "predicts the rates of more than 1,000 diseases" years into the future, the team from British, Danish, German and Swiss institutions wrote in a paper published in the journal Nature.

Researchers trained the model on data from Britain's UK Biobank -- a large-scale biomedical research database with details on about half a million participants.

Neural networks based on so-called "transformer" architecture -- the "T" in "ChatGPT" -- most famously tackle language-based tasks, as in the chatbot and its many imitators and competitors.

But understanding a sequence of medical diagnoses is "a bit like learning the grammar in a text," German Cancer Research Center AI expert Moritz Gerstung told journalists.

Delphi-2M "learns the patterns in healthcare data, preceding diagnoses, in which combinations they occur and in which succession", he said, enabling "very meaningful and health-relevant predictions".

Gerstung presented charts suggesting the AI could single out people at far higher or lower risk of suffering a heart attack than their age and other factors would predict.

The team verified Delphi-2M's performance by testing it against data from almost two million people in Denmark's public health database.

But Gerstung and fellow team members stressed that the Delphi-2M tool needed further testing and was not yet ready for clinical use.

"This is still a long way from improved healthcare as the authors acknowledge that both (British and Danish) datasets are biased in terms of age, ethnicity and current healthcare outcomes," commented health technology researcher Peter Bannister, a fellow at Britain's Institution of Engineering and Technology.

But in future, systems like Delphi-2M could help "guide the monitoring and possibly earlier clinical interventions for effectively a preventative type of medicine", Gerstung said.

On a larger scale, such tools could help with "optimization of resources across a stretched healthcare system", European Molecular Biology Laboratory co-author Tom Fitzgerald said.

Doctors in many countries already use computer tools to predict risk of disease, such as the QRISK3 program that British family doctors use to assess the danger of heart attack or stroke.

Delphi-2M, by contrast, "can do all diseases at once and over a long time period", said co-author Ewan Birney.

Gustavo Sudre, a King's College London professor specializing in medical AI, commented that the research "looks to be a significant step towards scalable, interpretable and -- most importantly -- ethically responsible predictive modelling".

"Interpretable" or "explainable" AI is one of the top research goals in the field, as the full inner workings of many large AI models currently remain mysterious even to their creators.



SVC Develops AI Intelligence Platform to Strengthen Private Capital Ecosystem

The platform offers customizable analytical dashboards that deliver frequent updates and predictive insights. (SPA)

Saudi Venture Capital Company (SVC) announced the launch of its proprietary intelligence platform, Aian, developed in-house using Saudi national expertise to enhance its institutional role in developing the Kingdom’s private capital ecosystem and supporting its mandate as a market maker guided by data-driven growth principles.

According to a press release issued by the SVC today, Aian is a custom-built AI-powered market intelligence capability that transforms SVC’s accumulated institutional expertise and detailed private market data into structured, actionable insights on market dynamics, sector evolution, and capital formation. The platform converts institutional memory into compounding intelligence, enabling decisions that integrate both current market signals and long-term historical trends, SPA reported.

Deputy CEO and Chief Investment Officer Nora Alsarhan stated that as Saudi Arabia’s private capital market expands, clarity, transparency, and data integrity become as critical as capital itself. She noted that Aian represents a new layer of national market infrastructure, strengthening institutional confidence, enabling evidence-based decision-making, and supporting sustainable growth.

By transforming data into actionable intelligence, she said, the platform reinforces the Kingdom’s position as a leading regional private capital hub under Vision 2030.

She added that market making extends beyond capital deployment to shaping the conditions under which capital flows efficiently, emphasizing that the next phase of market development will be driven by intelligence and analytical insight alongside investment.

Through Aian, SVC is building the knowledge backbone of Saudi Arabia’s private capital ecosystem, enabling clearer visibility, greater precision in decision-making, and capital formation guided by insight rather than assumption.

Chief Strategy Officer Athary Almubarak said that in private capital markets, access to reliable insight increasingly represents the primary constraint, particularly in emerging and fast-scaling markets where disclosures vary and institutional knowledge is fragmented.

She explained that for development-focused investment institutions, inconsistent data presents a structural challenge that directly impacts capital allocation efficiency and the ability to crowd in private investment at scale.

She noted that SVC was established to address such market frictions and that, as a government-backed investor with an explicit market-making mandate, its role extends beyond financing to building the enabling environment in which private capital can grow sustainably.

By integrating SVC’s proprietary portfolio data with selected external market sources, Aian enables continuous consolidation and validation of market activity, producing a dynamic representation of capital deployment over time rather than relying solely on static reporting.

The platform offers customizable analytical dashboards that deliver frequent updates and predictive insights, enabling SVC to identify priority market gaps, recalibrate capital allocation, design targeted ecosystem interventions, and anchor policy dialogue in evidence.

The release added that Aian also features predictive analytics capabilities that anticipate upcoming funding activity, including projected investment rounds and estimated ticket sizes. In addition, it incorporates institutional benchmarking tools that enable structured comparisons across peers, sectors, and interventions, supporting more precise, data-driven ecosystem development.


Job Threats, Rogue Bots: Five Hot Issues in AI

A Delhi police officer outside the venue of the 'India AI Impact Summit 2026'. Arun SANKAR / AFP

As artificial intelligence evolves at a blistering pace, world leaders and thousands of other delegates will discuss how to handle the technology at the AI Impact Summit, which opens Monday in New Delhi.

Here are five big issues on the agenda:

Job loss fears

Generative AI threatens to disrupt myriad industries, from software development and factory work to music and the movies.

India -- with its large customer service and tech support sectors -- could be vulnerable, and shares in the country's outsourcing firms have plunged in recent days, partly due to advances in AI assistant tools.

"Automation, intelligent systems, and data-driven processes are increasingly taking over routine and repetitive tasks, reshaping traditional job structures," the summit's "human capital" working group says.

"While these developments can drive efficiency and innovation, they also risk displacing segments of the workforce," widening socio-economic divides, it warns.

Bad robots

The Delhi summit is the fourth in a series of international AI meetings. The first in 2023 was called the AI Safety Summit, and preventing real-world harm is still a key goal.

In the United States, families of people who have taken their own lives have sued OpenAI, accusing ChatGPT of having contributed to the suicides. The company says it has made efforts to strengthen its safeguards.

Elon Musk's Grok AI tool also recently sparked global outrage and bans in several countries over its ability to create sexualized deepfakes depicting real people, including children, in skimpy clothing.

Other concerns range from copyright violations to scammers using AI tools to produce perfectly spelled phishing emails.

Energy demands

Tech giants are spending hundreds of billions of dollars on AI infrastructure, building data centers packed with cutting-edge microchips, and also, in some cases, nuclear plants to power them.

The International Energy Agency projects that electricity consumption from data centers will double by 2030, fueled by the AI boom.

In 2024, data centers accounted for an estimated 1.5 percent of global electricity consumption, it says.

Alongside concerns over planet-warming carbon emissions are worries about the water used to cool data center servers, which can lead to shortages on hot days.

Moves to regulate

In South Korea, a wide-ranging law regulating artificial intelligence took effect in January, requiring companies to tell users when products use generative AI.

Many countries are planning similar moves, despite a warning from US Vice President JD Vance last year against "excessive regulation" that could stifle innovation.

The European Union's Artificial Intelligence Act allows regulators to ban AI systems deemed to pose "unacceptable risks" to society.

That could include identifying people in real time in public spaces or evaluating criminal risk based on biometric data alone.

'Everyone dies'

More existential fears have also been expressed by AI insiders who believe the technology is marching towards so-called "Artificial General Intelligence", when machines' abilities match those of humans.

OpenAI and rival startup Anthropic have seen public resignations of staff members who have spoken out about the ethical implications of their technology.

Anthropic warned last week that its latest chatbot models could be nudged towards "knowingly supporting -- in small ways -- efforts toward chemical weapon development and other heinous crimes".

Researcher Eliezer Yudkowsky, author of the 2025 book "If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All" has also compared AI to the development of nuclear weapons.


OpenAI Hires Creator of 'OpenClaw' AI Agent Tool

FILE PHOTO: OpenAI logo is seen in this illustration taken May 20, 2024. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI has hired the Austrian creator of OpenClaw, an artificial intelligence tool able to execute real-world tasks, the US startup's head Sam Altman said on Sunday.

AI agent tool OpenClaw has fascinated -- and spooked -- the tech world since researcher Peter Steinberger built it in November to help organize his digital life.

A Reddit-like pseudo social network for OpenClaw agents called Moltbook, where chatbots converse, has also grabbed headlines and provoked soul-searching over AI.

Elon Musk called Moltbook "the very early stages of the singularity" -- a term for a hypothetical point at which AI irreversibly surpasses human intelligence -- although some people have questioned to what extent humans are manipulating the bots' posts.

Steinberger "is joining OpenAI to drive the next generation of personal agents," Altman wrote in an X post.

"He is a genius with a lot of amazing ideas about the future of very smart agents interacting with each other to do very useful things for people," AFP quoted him as saying.

"We expect this will quickly become core to our product offerings," Altman wrote, saying that OpenClaw would remain an open-source project within a foundation supported by OpenAI.

"The future is going to be extremely multi-agent and it's important to us to support open source as part of that."

Users of OpenClaw download the tool, and connect it to generative AI models such as ChatGPT.

They then communicate with their AI agent through WhatsApp or Telegram, as they would with a friend or colleague.

Many users gush over the tool's futuristic abilities to send emails and buy things online, but others report an overall chaotic experience with added cybersecurity risks.

Only a small percentage of OpenAI's nearly one billion users pay for subscription services, putting pressure on the company to find new revenue sources.

It has begun testing advertisements and sponsored content in the massively popular ChatGPT, spawning privacy concerns as it looks for ways to start balancing its hundreds of billions in spending commitments.