Google Unveils AI Tool Probing Mysteries of Human Genome

A Google logo is seen at a company research facility in Mountain View, California, US, May 13, 2025. (Reuters)

Google unveiled an artificial intelligence tool Wednesday that its scientists said would help unravel the mysteries of the human genome -- and could one day lead to new treatments for diseases.

The deep learning model AlphaGenome was hailed by outside researchers as a "breakthrough" that would let scientists study and even simulate the roots of difficult-to-treat genetic diseases.

While the first complete map of the human genome in 2003 "gave us the book of life, reading it remained a challenge", Pushmeet Kohli, vice president of research at Google DeepMind, told journalists.

"We have the text," he said, which is a sequence of three billion nucleotide pairs represented by the letters A, T, C and G that make up DNA.

However, "understanding the grammar of this genome -- what is encoded in our DNA and how it governs life -- is the next critical frontier for research," said Kohli, co-author of a new study in the journal Nature.

Only around two percent of our DNA contains instructions for making proteins, which are the molecules that build and run the body.

The other 98 percent was long dismissed as "junk DNA" as scientists struggled to understand what it was for.

However, this "non-coding DNA" is now believed to act like a conductor, directing how genetic information works in each of our cells.

These sequences also contain many variants that have been associated with diseases. It is these sequences that AlphaGenome is aiming to understand.

- A million letters -

The project is just one part of Google's AI-powered scientific work, which also includes AlphaFold, the protein-structure prediction tool whose creators shared the 2024 Nobel Prize in chemistry.

AlphaGenome's model was trained on data from public projects that measured non-coding DNA across hundreds of different cell and tissue types in humans and mice.

The tool is able to analyze long DNA sequences and then predict how each nucleotide pair influences different biological processes within the cell.

This includes where genes start and stop and how much RNA -- the molecules that transmit genetic instructions inside cells -- is produced.

Other models with a similar aim already exist. However, they have to compromise, either by analyzing far shorter DNA sequences or by reducing how detailed their predictions are -- a property known as resolution.

DeepMind scientist and lead study author Ziga Avsec said that long sequences -- up to a million DNA letters long -- were "required to understand the full regulatory environment of a single gene".

And the high resolution of the model allows scientists to study the impact of genetic variants by comparing the differences between mutated and non-mutated sequences.
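In practical terms, that kind of comparison amounts to scoring a variant by running the model once on a reference sequence and once on the same sequence with a single base changed, then taking the difference between the two predictions. The Python sketch below is purely illustrative of that idea; the model object and its predict method are hypothetical placeholders and do not reflect AlphaGenome's actual interface.

```python
# Hypothetical sketch of variant-effect scoring by comparing predictions
# for a reference sequence and the same sequence with one base mutated.
# `model` and its `predict` method are illustrative placeholders, not
# AlphaGenome's real API.

def score_variant(model, sequence: str, position: int, alt_base: str):
    """Return the change in a predicted signal (e.g. RNA output) caused
    by substituting `alt_base` at `position` in `sequence`."""
    mutated = sequence[:position] + alt_base + sequence[position + 1:]
    ref_prediction = model.predict(sequence)   # e.g. per-base predictions
    alt_prediction = model.predict(mutated)
    # The difference between the two prediction tracks is the estimated
    # effect of the variant at single-base resolution.
    return alt_prediction - ref_prediction
```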

"AlphaGenome can accelerate our understanding of the genome by helping to map where the functional elements are and what their roles are on a molecular level," study co-author Natasha Latysheva said.

The model has already been tested by 3,000 scientists across 160 countries and is open for anyone to use for non-commercial purposes, Google said.

"We hope researchers will extend it with more data," Kohli added.

- 'Breakthrough' -

Ben Lehner, a researcher at Cambridge University who was not involved in developing AlphaGenome but did test it, said the model "does indeed perform very well".

"Identifying the precise differences in our genomes that make us more or less likely to develop thousands of diseases is a key step towards developing better therapeutics," he explained.

However, AlphaGenome "is far from perfect and there is still a lot of work to do", he added.

"AI models are only as good as the data used to train them" and the existing data is not very suitable, he said.

Robert Goldstone, head of genomics at the UK's Francis Crick Institute, cautioned that AlphaGenome was "not a magic bullet for all biological questions".

This was partly because "gene expression is influenced by complex environmental factors that the model cannot see", he said.

However, the tool still represented a "breakthrough" that would allow scientists to "study and simulate the genetic roots of complex disease", Goldstone added.



Hong Kong Scientists Launch AI Model to Better Predict Extreme Weather

A general view of Two International Finance Centre (IFC), HSBC headquarters and Bank of China in Hong Kong, China July 13, 2021. (Reuters)

A team of Hong Kong scientists has developed an artificial intelligence weather-forecasting system that can predict thunderstorms and heavy downpours up to four hours ahead, compared with the current range of 20 minutes to two hours.

The system will help governments and emergency services respond more effectively to increasingly frequent extremes of weather linked to climate change, the team from Hong Kong University of Science and Technology said on Wednesday.

"We hope to use AI and satellite data to improve prediction of extreme weather ‌so we can ‌be better prepared," said Su ‌Hui, chair ⁠professor ​of ‌the university's civil and environmental engineering department, who led the project.

The system is aimed at predicting heavy rainfall, Su told a press conference on the work, which was published in the Proceedings of the National Academy of Sciences in December.

Its model applies generative AI techniques, injecting noise into the training data so that the system learns to reverse the process, with the aim of producing more precise forecasts.
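For illustration only, the sketch below shows one training step of this noise-and-denoise approach in PyTorch. The network, the noise schedule and the data shapes are hypothetical placeholders, not the team's actual DDMS implementation.

```python
# Illustrative diffusion-style training step: corrupt clean fields with
# noise, then train the model to predict (and thus reverse) that noise.
# `model` and `optimizer` are placeholders standing in for the real system.
import torch
import torch.nn.functional as F

def diffusion_training_step(model, clean_fields, optimizer):
    """One step: clean_fields is a batch of satellite-derived maps
    with shape (batch, channels, height, width)."""
    noise = torch.randn_like(clean_fields)
    # Random noise level per sample, from nearly clean to nearly pure noise.
    t = torch.rand(clean_fields.shape[0], 1, 1, 1)
    noisy_fields = (1 - t) * clean_fields + t * noise
    # The network sees the corrupted fields and the noise level,
    # and is trained to recover the injected noise.
    predicted_noise = model(noisy_fields, t.flatten())
    loss = F.mse_loss(predicted_noise, noise)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```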

Developed in collaboration with China's weather authorities, it refreshes forecasts every 15 minutes and has boosted accuracy by more than 15%, the team said.

Such work is crucial because the number of typhoons and episodes of wet weather Hong Kong and much of southern China faced in 2025 far exceeded the seasonal norm, scientists said.

The city issued its highest rainstorm warning five times last year and the second-highest 16 times, setting new records, its observatory said.

Both the China Meteorological Administration and the Hong Kong Observatory are working to incorporate the model into their forecasts.

The team's new AI framework, called the Deep Diffusion Model based on Satellite Data (DDMS), was trained using infrared brightness temperature data collected between 2018 and 2021 by China’s Fengyun-4 satellite.

Satellites can detect cloud formation earlier than other forecasting systems such as radar, Su added.

The data was combined with meteorological expertise to capture the evolution of convective cloud systems, and the model was later validated with spring and summer samples from 2022 and 2023.


Will the EU Ban Social Media for Children in 2026?

The Instagram logo displayed on a mobile phone alongside a laptop keyboard in Liverpool, Britain, 23 January 2026. (EPA)

As France moves one step closer to banning social media for children, the European Union is seriously considering whether it's time for the bloc to follow suit.

Pressure has been rising since Australia's social media ban for under-16s entered into force, and Brussels is keeping a close eye on how successful it proves, with the ban already facing legal challenges.

France had been spearheading a months-long push for similar EU action alongside member states including Denmark, Greece and Spain -- before deciding to strike out on its own. Its lower house of parliament this week passed a bill that would ban social media use by under-15s, which still needs Senate approval to become law.

At EU level, tough rules already regulate the digital space, with multiple probes ongoing into the impact on children of platforms including Instagram and TikTok.

European Commission chief Ursula von der Leyen has advocated going further with a minimum age limit, but first wants to hear from experts on what approach the 27-nation bloc should take.

- 'All doors open' -

A consultative panel on social media use, promised by von der Leyen by the end of 2025, is now expected to be set up "early" this year.

Its objective? To advise the president on what the EU's next steps should be to further protect children online, commission spokesman Thomas Regnier said.

"We're leaving all doors open. We will get feedback, and then we will take potential future decisions in this regard," Regnier said on Tuesday.

The European Parliament has already called for a social media ban on under-16s -- with Malaysia, Norway and New Zealand also planning similar restrictions.

France isn't alone in opting not to wait for EU-level action.

Denmark last year said it would ban access to social media for minors under 15.

Both countries are among five EU states currently testing an age-verification app they hope will prevent children accessing harmful content online.

Commission spokesman Regnier said that tool, which is to be rolled out by the end of the year, would be a way for Brussels to enforce compliance with whatever rules are adopted at national level, in France or elsewhere.

- EU vows to 'close cases' -

While the EU has yet to ban children from social media, its content law known as the Digital Services Act (DSA) gives regulators the power to force companies to modify their platforms to better protect minors online.

For example, the DSA bans targeted advertising to children.

The EU can "use the DSA to impact the way that children interact with social media", Paul Oliver Richter, affiliate fellow at the Bruegel think tank said.

In February and May 2024 respectively, the EU launched probes into TikTok and into Meta's Facebook and Instagram over fears the platforms may not be doing enough to address negative impacts on young people.

In both investigations, the EU expressed fears over the so-called "rabbit hole" effect -- in which an algorithm keeps feeding users related content, in some cases leading them toward more extreme material.

Nearly two years on, the EU has yet to wrap up the probes, although one official says regulators hope to deliver preliminary findings in the first half of the year.

EU spokesman Regnier has insisted "work is heavily ongoing".

Without referring to any specific probes, he said that "for certain investigations, we need more time", but added: "We will close these cases."


Meta, TikTok and YouTube Face Landmark Trial over Youth Addiction Claims

FILE PHOTO: The logo of Meta is seen at Porte de Versailles exhibition center in Paris, France, June 11, 2025. REUTERS/Gonzalo Fuentes/File Photo

Three of the world's biggest tech companies face a landmark trial in Los Angeles starting this week over claims that their platforms — Meta's Instagram, ByteDance's TikTok and Google's YouTube — deliberately addict and harm children.

Jury selection starts this week in the Los Angeles County Superior Court. It's the first time the companies will argue their case before a jury, and the outcome could have profound effects on their businesses and how they will handle children using their platforms. The selection process is expected to take at least a few days, with 75 potential jurors questioned each day through at least Thursday. A fourth company named in the lawsuit, Snapchat parent company Snap Inc., settled the case last week for an undisclosed sum.

At the core of the case is a 19-year-old identified only by the initials “KGM,” whose case could determine how thousands of other, similar lawsuits against social media companies will play out. She and two other plaintiffs have been selected for bellwether trials — essentially test cases for both sides to see how their arguments play out before a jury and what damages, if any, may be awarded, said Clay Calvert, a nonresident senior fellow of technology policy studies at the American Enterprise Institute.

KGM claims that her use of social media from an early age addicted her to the technology and exacerbated depression and suicidal thoughts. Importantly, the lawsuit claims that this was done through deliberate design choices made by companies that sought to make their platforms more addictive to children to boost profits. This argument, if successful, could sidestep the companies' First Amendment shield and Section 230, which protects tech companies from liability for material posted on their platforms.

“Borrowing heavily from the behavioral and neurobiological techniques used by slot machines and exploited by the cigarette industry, Defendants deliberately embedded in their products an array of design features aimed at maximizing youth engagement to drive advertising revenue,” the lawsuit says.

Executives, including Meta CEO Mark Zuckerberg, are expected to testify at the trial, which will last six to eight weeks. Experts have drawn similarities to the Big Tobacco trials that led to a 1998 settlement requiring cigarette companies to pay billions in healthcare costs and restrict marketing targeting minors.

“Plaintiffs are not merely the collateral damage of Defendants’ products,” the lawsuit says. “They are the direct victims of the intentional product design choices made by each Defendant. They are the intended targets of the harmful features that pushed them into self-destructive feedback loops.”

The tech companies dispute the claims that their products deliberately harm children, citing a bevy of safeguards they have added over the years and arguing that they are not liable for content posted on their sites by third parties.

“Recently, a number of lawsuits have attempted to place the blame for teen mental health struggles squarely on social media companies,” Meta said in a recent blog post. "But this oversimplifies a serious issue. Clinicians and researchers find that mental health is a deeply complex and multifaceted issue, and trends regarding teens' well-being aren't clear-cut or universal. Narrowing the challenges faced by teens to a single factor ignores the scientific research and the many stressors impacting young people today, like academic pressure, school safety, socio-economic challenges and substance abuse."

Meta, YouTube and TikTok did not immediately respond to requests for comment Monday.

The case will be the first in a slew of cases beginning this year that seek to hold social media companies responsible for harming children's mental well-being. A federal bellwether trial beginning in June in Oakland, California, will be the first to represent school districts that have sued social media platforms over harms to children.

In addition, more than 40 state attorneys general have filed lawsuits against Meta, claiming it is harming young people and contributing to the youth mental health crisis by deliberately designing features on Instagram and Facebook that addict children to its platforms. Most filed their lawsuits in federal court, but some sued in their own state courts.

TikTok also faces similar lawsuits in more than a dozen states.