CRISPR, 10 Years On: Learning to Rewrite the Code of Life

Ten years ago this week, Jennifer Doudna and her colleagues published the results of a test-tube experiment on bacterial genes. When the study came out in the journal Science on June 28, 2012, it did not make headline news. In fact, over the next few weeks, it did not make any news at all.

Looking back, Dr. Doudna wondered if the oversight had something to do with the wonky title she and her colleagues had chosen for the study: “A Programmable Dual RNA-Guided DNA Endonuclease in Adaptive Bacterial Immunity.”

“I suppose if I were writing the paper today, I would have chosen a different title,” Dr. Doudna, a biochemist at the University of California, Berkeley, said in an interview.

Far from an esoteric finding, the discovery pointed to a new method for editing DNA, one that might even make it possible to change human genes.

“I remember thinking very clearly, when we publish this paper, it’s like firing the starting gun at a race,” she said.

In just a decade, CRISPR has become one of the most celebrated inventions in modern biology. It is swiftly changing how medical researchers study diseases: Cancer biologists are using the method to discover hidden vulnerabilities of tumor cells. Doctors are using CRISPR to edit genes that cause hereditary diseases.

“The era of human gene editing isn’t coming,” said David Liu, a biologist at Harvard University. “It’s here.”

But CRISPR’s influence extends far beyond medicine. Evolutionary biologists are using the technology to study Neanderthal brains and to investigate how our ape ancestors lost their tails. Plant biologists have edited seeds to produce crops with new vitamins or with the ability to withstand diseases. Some of them may reach supermarket shelves in the next few years.

CRISPR has had such a quick impact that Dr. Doudna and her collaborator, Emmanuelle Charpentier of the Max Planck Unit for the Science of Pathogens in Berlin, won the 2020 Nobel Prize in Chemistry. The award committee hailed their 2012 study as “an epoch-making experiment.”

Dr. Doudna recognized early on that CRISPR would pose a number of thorny ethical questions, and after a decade of its development, those questions are more urgent than ever.

Will the coming wave of CRISPR-altered crops feed the world and help poor farmers or only enrich agribusiness giants that invest in the technology? Will CRISPR-based medicine improve health for vulnerable people across the world, or come with a million-dollar price tag?

The most profound ethical question about CRISPR is how future generations might use the technology to alter human embryos. This notion was simply a thought experiment until 2018, when He Jiankui, a biophysicist in China, edited a gene in human embryos to confer resistance to H.I.V. Three of the modified embryos were implanted in women in the Chinese city of Shenzhen.

In 2019, a court sentenced Dr. He to prison for “illegal medical practices.” MIT Technology Review reported in April that he had recently been released. Little is known about the health of the three children, who are now toddlers.

Scientists don’t know of anyone else who has followed Dr. He’s example — yet. But as CRISPR continues to improve, editing human embryos may eventually become a safe and effective treatment for a variety of diseases.

Will it then become acceptable, or even routine, to repair disease-causing genes in an embryo in the lab? What if parents wanted to insert traits that they found more desirable — like those related to height, eye color or intelligence?

Françoise Baylis, a bioethicist at Dalhousie University in Nova Scotia, worries that the public is still not ready to grapple with such questions.

“I’m skeptical about the depth of understanding about what’s at issue there,” she said. “There’s a difference between making people better and making better people.”

Dr. Doudna and Dr. Charpentier did not invent their gene-editing method from scratch. They borrowed their molecular tools from bacteria.

In the 1980s, microbiologists discovered puzzling stretches of DNA in bacteria, later called Clustered Regularly Interspaced Short Palindromic Repeats, or CRISPR for short. Further research revealed that bacteria used these CRISPR sequences as weapons against invading viruses.

The bacteria turned these sequences into genetic material, called RNA, that could stick precisely to a short stretch of an invading virus’s genes. These RNA molecules carry proteins with them that act like molecular scissors, slicing the viral genes and halting the infection.

As Dr. Doudna and Dr. Charpentier investigated CRISPR, they realized that the system might allow them to cut a sequence of DNA of their own choosing. All they needed to do was make a matching piece of RNA.

To test this revolutionary idea, they created a batch of identical pieces of DNA. They then crafted another batch of RNA molecules, programming all of them to home in on the same spot on the DNA. Finally, they mixed the DNA, the RNA and molecular scissors together in test tubes. They discovered that many of the DNA molecules had been cut at precisely the right spot.
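
For readers who think in code, the logic of that experiment is essentially a string search: the guide RNA is a 20-letter query, and the Cas9 scissors cut wherever the query matches, next to a short "PAM" motif the enzyme requires. The toy Python sketch below is illustrative only; the sequences, the PAM handling and the cut-site offset are simplified stand-ins for the real biochemistry.

```python
# Toy model of the 2012 test-tube result: a guide is "programmed" to match
# a 20-letter stretch of DNA, and the Cas9 "scissors" cut at that spot.
# Real Cas9 needs an adjacent PAM motif ("NGG") and cuts about 3 letters
# away from it; all sequences here are made up for illustration, and the
# guide is written in DNA letters (real guides are RNA, with U for T).

def find_cut_sites(dna: str, guide: str) -> list[int]:
    """Return cut positions where the guide matches and an NGG PAM follows."""
    sites = []
    for i in range(len(dna) - len(guide) - 2):
        match = dna[i : i + len(guide)] == guide
        pam = dna[i + len(guide) + 1 : i + len(guide) + 3] == "GG"  # "N" is any letter
        if match and pam:
            sites.append(i + len(guide) - 3)  # cut ~3 letters before the PAM
    return sites

dna = "ATGCCGTACGGATTACAGGTTCACCGTGGAGTCAAGGTTAA"
guide = "TACGGATTACAGGTTCACCG"  # the 20-letter target chosen by the experimenter

for site in find_cut_sites(dna, guide):
    print(dna[:site], "/", dna[site:])  # two fragments, severed at the chosen spot
```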

For months Dr. Doudna oversaw a series of round-the-clock experiments to see if CRISPR might work not only in a test tube, but also in living cells. She pushed her team hard, suspecting that many other scientists were also on the chase. That hunch soon proved correct.

In January 2013, five teams of scientists published studies in which they successfully used CRISPR in living animal or human cells. Dr. Doudna did not win that race; the first two published papers came from two labs in Cambridge, Mass. — one at the Broad Institute of M.I.T. and Harvard, and the other at Harvard.

Lukas Dow, a cancer biologist at Weill Cornell Medicine, vividly remembers learning about CRISPR’s potential. “Reading the papers, it looked amazing,” he recalled.

Dr. Dow and his colleagues soon found that the method reliably snipped out pieces of DNA in human cancer cells.

“It became a verb to drop,” Dr. Dow said. “A lot of people would say, ‘Did you CRISPR that?’”

Cancer biologists began systematically altering every gene in cancer cells to see which ones mattered to the disease. Researchers at KSQ Therapeutics, also in Cambridge, used CRISPR to discover a gene that is essential for the growth of certain tumors, for example, and last year, they began a clinical trial of a drug that blocks the gene.

Caribou Biosciences, co-founded by Dr. Doudna, and CRISPR Therapeutics, co-founded by Dr. Charpentier, are both running clinical trials for CRISPR treatments that fight cancer in another way: by editing immune cells to more aggressively attack tumors.

Those companies and several others are also using CRISPR to try to reverse hereditary diseases. On June 12, researchers from CRISPR Therapeutics and Vertex, a Boston-based biotech firm, presented at a scientific meeting new results from their clinical trial involving 75 volunteers who had sickle-cell anemia or beta thalassemia. These diseases impair hemoglobin, a protein in red blood cells that carries oxygen.

The researchers took advantage of the fact that humans have more than one hemoglobin gene. One copy, called fetal hemoglobin, is typically active only in fetuses, shutting down within a few months after birth.

The researchers extracted immature blood cells from the bone marrow of the volunteers. They then used CRISPR to snip out the switch that would typically turn off the fetal hemoglobin gene. When the edited cells were returned to the patients, they could develop into red blood cells rife with fetal hemoglobin.
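
In code terms, that edit is a targeted deletion: remove the regulatory "off switch" and the fetal hemoglobin gene is never silenced. A minimal sketch, with hypothetical placeholder sequences standing in for the real regulatory element:

```python
# Sketch of the strategy described above: snip out the silencer "switch"
# so the fetal hemoglobin gene stays on. Sequences are hypothetical.

def delete_switch(locus: str, switch: str) -> str:
    """Remove the off-switch element; without it, the gene is not shut down."""
    return locus.replace(switch, "", 1)

locus = "AAAA" + "GGCCTT" + "ATGGTGCAT"  # [upstream][off switch][gene start]
print(delete_switch(locus, "GGCCTT"))    # -> AAAAATGGTGCAT: switch gone, gene intact
```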

Speaking at a hematology conference, the researchers reported that out of 44 treated patients with beta thalassemia, 42 no longer needed regular blood transfusions. None of the 31 sickle cell patients experienced painful drops in oxygen that would have normally sent them to the hospital.
CRISPR Therapeutics and Vertex expect to ask government regulators by the end of the year to approve the treatment.

Other companies are injecting CRISPR molecules directly into the body. Intellia Therapeutics, based in Cambridge and also co-founded by Dr. Doudna, has teamed up with Regeneron, based in Westchester County, N.Y., to begin a clinical trial to treat transthyretin amyloidosis, a rare disease in which a damaged liver protein becomes lethal as it builds up in the blood.

Doctors injected CRISPR molecules into the volunteers’ livers to shut down the defective gene. Speaking at a scientific conference last Friday, Intellia researchers reported that a single dose of the treatment produced a significant drop in the protein level in volunteers’ blood for as long as a year thus far.

The same technology that allows medical researchers to tinker with human cells is letting agricultural scientists alter crop genes. When the first wave of CRISPR studies came out, Catherine Feuillet, an expert on wheat, who was then at the French National Institute for Agricultural Research, immediately saw its potential for her own work.

“I said, ‘Oh my God, we have a tool,’” she said. “We can put breeding on steroids.”

At Inari Agriculture, a company in Cambridge, Dr. Feuillet is overseeing efforts to use CRISPR to make breeds of soybeans and other crops that use less water and fertilizer. Outside of the United States, British researchers have used CRISPR to breed a tomato that can produce vitamin D.

Kevin Pixley, a plant scientist at the International Maize and Wheat Improvement Center in Mexico City, said that CRISPR is important to plant breeding not only because it’s powerful, but because it’s relatively cheap. Even small labs can create disease-resistant cassavas or drought-resistant bananas, which could benefit poor nations but would not interest companies looking for hefty financial returns.

Because of CRISPR’s use for so many different industries, its patent has been the subject of a long-running dispute. Groups led by the Broad Institute and the University of California both filed patents for the original version of gene editing based on CRISPR-Cas9 in living cells. The Broad Institute won a patent in 2014, and the University of California responded with a court challenge.

In February of this year, the US Patent Trial and Appeal Board issued what is most likely the final word on the dispute, ruling in favor of the Broad Institute.

Jacob Sherkow, an expert on biotech patents at the University of Illinois College of Law, predicted that companies that have licensed the CRISPR technology from the University of California will need to honor the Broad Institute patent.

“The big-ticket CRISPR companies, the ones that are farthest along in clinical trials, are almost certainly going to need to write the Broad Institute a really big check,” he said.

The original CRISPR system, known as CRISPR-Cas9, leaves plenty of room for improvement. The molecules are good at snipping out DNA, but they’re not as good at inserting new pieces in their place. Sometimes CRISPR-Cas9 misses its target, cutting DNA in the wrong place. And even when the molecules do their jobs correctly, cells can make mistakes as they repair the loose ends of DNA left behind.

A number of scientists have invented new versions of CRISPR that overcome some of these shortcomings. At Harvard, for example, Dr. Liu and his colleagues have used CRISPR to make a nick in one of DNA’s two strands, rather than breaking them entirely. This process, known as base editing, lets them precisely change a single genetic letter of DNA with much less risk of genetic damage.
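
Conceptually, base editing is a one-character substitution rather than a cut-and-repair. A minimal string-level sketch (the position and the beta-globin-like fragment below are illustrative; real base editors chemically convert one base into another rather than splicing text):

```python
# Base editing as a single-letter swap: no double-strand break, just one
# base changed in place. Position and sequence below are illustrative.

def base_edit(dna: str, pos: int, new_base: str) -> str:
    """Return `dna` with the letter at `pos` replaced by `new_base`."""
    assert new_base in "ACGT" and 0 <= pos < len(dna)
    return dna[:pos] + new_base + dna[pos + 1 :]

# The sickle-cell mutation is a single T where an A belongs; reverting it
# is, on paper, one swap, shown here on a short beta-globin-like fragment:
sickle = "GTGCACCTGACTCCTGTGGAG"
print(base_edit(sickle, 16, "A"))  # -> ...CCTGAGGAG, the non-sickle spelling
```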

Dr. Liu has co-founded a company called Beam Therapeutics to create base-editing drugs. Later this year, the company will test its first drug on people with sickle cell anemia.

Dr. Liu and his colleagues have also attached CRISPR molecules to a protein that viruses use to insert their genes into their host’s DNA. This new method, called prime editing, could enable CRISPR to alter longer stretches of genetic material.

“Prime editors are kind of like DNA word processors,” Dr. Liu said. “They actually perform a search and replace function on DNA.”
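
Taking the word-processor analogy literally, prime editing behaves like search-and-replace over the genome, able to swap in a longer stretch of new letters. A deliberately simplified string model, with made-up sequences:

```python
# Dr. Liu's analogy, taken literally: search for a stretch of DNA and
# replace it with a new one. A toy model only; sequences are invented.

def prime_edit(dna: str, search: str, replace: str) -> str:
    """Swap the first occurrence of `search` for `replace`."""
    assert search in dna, "no match: a prime editor would leave the DNA unchanged"
    return dna.replace(search, replace, 1)

print(prime_edit("ATGCCGTACGGATTACAGG", "CGTACG", "CGAAAG"))
# -> ATGCCGAAAGGATTACAGG: six letters rewritten in place
```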

Rodolphe Barrangou, a CRISPR expert at North Carolina State University and a founder of Intellia Therapeutics, predicted that prime editing would eventually become a part of the standard CRISPR toolbox. But for now, he said, the technique was still too complex to become widely used. “It’s not quite ready for prime time, pun intended,” he said.

Advances like prime editing didn’t yet exist in 2018, when Dr. He set out to edit human embryos in Shenzhen. He used the standard CRISPR-Cas9 system that Dr. Doudna and others had developed years before.

Dr. He hoped to endow babies with resistance to H.I.V. by snipping a piece of a gene called CCR5 from the DNA of embryos. People who naturally carry the same mutation rarely get infected by H.I.V.

In November 2018, Dr. He announced that a pair of twin girls had been born with his gene edits. The announcement took many scientists like Dr. Doudna by surprise, and they roundly condemned him for putting the health of the babies in jeopardy with untested procedures.

Dr. Baylis of Dalhousie University criticized Dr. He for the way he reportedly presented the procedure to the parents, downplaying the radical experiment they were about to undertake. “You could not get an informed consent, unless you were saying, ‘This is pie in the sky. Nobody’s ever done it,’” she said.

In the nearly four years since Dr. He’s announcement, scientists have continued to use CRISPR on human embryos. But they have studied embryos only when they’re tiny clumps of cells to find clues about the earliest stages of development. These studies could potentially lead to new treatments for infertility.

Bieke Bekaert, a graduate student in reproductive biology at Ghent University in Belgium, said that CRISPR remains challenging to use in human embryos. Breaking DNA in these cells can lead to drastic rearrangements in the chromosomes. “It’s more difficult than we thought,” said Ms. Bekaert, the lead author of a recent review of the subject. “We don’t really know what is happening.”

Still, Ms. Bekaert held out hope that prime editing and other improvements on CRISPR could allow scientists to make reliably precise changes to human embryos. “Five years is way too early, but I think in my lifetime it may happen,” she said.

The New York Times



US Requiring New AI Safeguards for Government Use, Transparency

An AI (Artificial Intelligence) sign is seen at the World Artificial Intelligence Conference (WAIC) in Shanghai, China July 6, 2023. REUTERS/Aly Song/File Photo

The White House said Thursday it is requiring federal agencies using artificial intelligence to adopt "concrete safeguards" by Dec. 1 to protect Americans’ rights and ensure safety as the government expands AI use in a wide range of applications.
The Office of Management and Budget issued a directive to federal agencies to monitor, assess and test AI’s impacts "on the public, mitigate the risks of algorithmic discrimination, and provide the public with transparency into how the government uses AI." Agencies must also conduct risk assessments and set operational and governance metrics, Reuters said.
The White House said agencies "will be required to implement concrete safeguards when using AI in a way that could impact Americans' rights or safety" including detailed public disclosures so the public knows how and when artificial intelligence is being used by the government.
President Joe Biden signed an executive order in October invoking the Defense Production Act to require developers of AI systems posing risks to US national security, the economy, public health or safety to share the results of safety tests with the US government before they are publicly released.
The White House on Thursday said new safeguards will ensure air travelers can opt out from Transportation Security Administration facial recognition use without delay in screening. When AI is used in federal healthcare to support diagnostic decisions, a human must oversee "the process to verify the tools’ results."
Generative AI - which can create text, photos and videos in response to open-ended prompts - has spurred excitement as well as fears that it could lead to job losses, upend elections and potentially overpower humans, with catastrophic effects.
The White House is requiring government agencies to release inventories of AI use cases, report metrics about AI use and release government-owned AI code, models and data when doing so does not pose risks.
The Biden administration cited ongoing federal AI uses, including the Federal Emergency Management Agency employing AI to assess structural hurricane damage, while the Centers for Disease Control and Prevention uses AI to predict the spread of disease and detect opioid use. The Federal Aviation Administration is using AI to help "deconflict air traffic in major metropolitan areas to improve travel time."
The White House plans to hire 100 AI professionals to promote the safe use of AI and is requiring federal agencies to designate chief AI officers within 60 days.
In January, the Biden administration proposed requiring US cloud companies to determine whether foreign entities are accessing US data centers to train AI models through "know your customer" rules.


Apple Announces Worldwide Developers Conference Dates, In-Person Event

The Apple logo hangs in front of an Apple store on March 21, 2024 in Chicago, Illinois. (Getty Images/AFP)

Apple has announced that its annual developers conference will take place June 10 through June 14.

The big summer event will be live-streamed, but some select developers have been invited to attend in-person events at Apple's campus in Cupertino, California, on June 10.

The company typically showcases its latest software and product updates — including for the iPhone, iPad, Apple Watch, AppleTV and Vision Pro headset — during a keynote address on the first day.

Concern that Apple lags behind Microsoft and Google in the push to develop products powered by artificial intelligence has contributed to a drop in its stock price this year.

While Apple tends to keep its product development close to the vest, CEO Tim Cook signaled at the company’s annual shareholder meeting in February that it has been making big investments in generative AI and plans to disclose more later this year.

The week-long conference will have opportunities for developers to connect with Apple designers and engineers to gain insight into new tools, frameworks and features, according to the company's announcement.


EU to Investigate Apple, Google, Meta for Potential Digital Markets Act Breaches

FILE - The Apple logo is illuminated at a store in Munich, Germany, Monday, Nov. 13, 2023. (AP Photo/Matthias Schrader, File)

EU antitrust regulators on Monday opened their first investigations under the Digital Markets Act into Apple, Alphabet's Google and Meta Platforms for potential breaches of the landmark EU tech rules.

"The (European) Commission suspects that the measures put in place by these gatekeepers fall short of effective compliance of their obligations under the DMA," the EU executive said in a statement.

The EU competition enforcer will investigate Alphabet's rules on steering in Google Play and self-preferencing on Google Search, Apple's rules on steering in the App Store and the choice screen for Safari and Meta's 'pay or consent model'.

The Commission also launched investigatory steps relating to Apple's new fee structure for alternative app stores and Amazon's ranking practices on its marketplace.


Apple Vision Pro to Hit Mainland China this Year

Apple CEO Tim Cook speaks during a parallel session of the China Development Forum at the Diaoyutai State Guesthouse in Beijing, China, on Sunday, March 24, 2024. (AP Photo/Tatan Syuflana)

Apple Vision Pro will hit the mainland China market this year, Apple chief executive Tim Cook said on Sunday, according to state media.

Cook revealed the headset's China launch plan in response to a media question on the sidelines of the China Development Forum in Beijing, CCTV finance said on its Weibo social account.

Apple will continue to ramp up research and development investment in China, he was quoted as saying.


AI Chatbots are Here to Help with Your Mental Health, despite Limited Evidence they Work

Representation photo: The word Pegasus and binary code are displayed on a smartphone which is placed on a keyboard in this illustration taken May 4, 2022. (Reuters)

Download the mental health chatbot Earkick and you’re greeted by a bandana-wearing panda who could easily fit into a kids' cartoon.
Start talking or typing about anxiety and the app generates the kind of comforting, sympathetic statements therapists are trained to deliver. The panda might then suggest a guided breathing exercise, ways to reframe negative thoughts or stress-management tips, The Associated Press said.
It's all part of a well-established approach used by therapists, but please don’t call it therapy, says Earkick co-founder Karin Andrea Stephan.
“When people call us a form of therapy, that’s OK, but we don’t want to go out there and tout it,” says Stephan, a former professional musician and self-described serial entrepreneur. “We just don’t feel comfortable with that.”
The question of whether these artificial intelligence-based chatbots are delivering a mental health service or are simply a new form of self-help is critical to the emerging digital health industry — and its survival.
Earkick is one of hundreds of free apps that are being pitched to address a crisis in mental health among teens and young adults. Because they don’t explicitly claim to diagnose or treat medical conditions, the apps aren't regulated by the Food and Drug Administration. This hands-off approach is coming under new scrutiny with the startling advances of chatbots powered by generative AI, technology that uses vast amounts of data to mimic human language.
The industry argument is simple: Chatbots are free, available 24/7 and don’t come with the stigma that keeps some people away from therapy.
But there’s limited data that they actually improve mental health. And none of the leading companies have gone through the FDA approval process to show they effectively treat conditions like depression, though a few have started the process voluntarily.
“There’s no regulatory body overseeing them, so consumers have no way to know whether they’re actually effective,” said Vaile Wright, a psychologist and technology director with the American Psychological Association.
Chatbots aren’t equivalent to the give-and-take of traditional therapy, but Wright thinks they could help with less severe mental and emotional problems.
Earkick’s website states that the app does not “provide any form of medical care, medical opinion, diagnosis or treatment.”
Some health lawyers say such disclaimers aren’t enough.
“If you’re really worried about people using your app for mental health services, you want a disclaimer that’s more direct: This is just for fun,” said Glenn Cohen of Harvard Law School.
Still, chatbots are already playing a role due to an ongoing shortage of mental health professionals.
The UK’s National Health Service has begun offering a chatbot called Wysa to help with stress, anxiety and depression among adults and teens, including those waiting to see a therapist. Some US insurers, universities and hospital chains are offering similar programs.
Dr. Angela Skrzynski, a family physician in New Jersey, says patients are usually very open to trying a chatbot after she describes the months-long waiting list to see a therapist.
Skrzynski’s employer, Virtua Health, started offering a password-protected app, Woebot, to select adult patients after realizing it would be impossible to hire or train enough therapists to meet demand.
“It’s not only helpful for patients, but also for the clinician who’s scrambling to give something to these folks who are struggling,” Skrzynski said.
Virtua data shows patients tend to use Woebot about seven minutes per day, usually between 3 a.m. and 5 a.m.
Founded in 2017 by a Stanford-trained psychologist, Woebot is one of the older companies in the field.
Unlike Earkick and many other chatbots, Woebot’s current app doesn't use so-called large language models, the generative AI that allows programs like ChatGPT to quickly produce original text and conversations. Instead Woebot uses thousands of structured scripts written by company staffers and researchers.
Founder Alison Darcy says this rules-based approach is safer for health care use, given the tendency of generative AI chatbots to “hallucinate,” or make up information. Woebot is testing generative AI models, but Darcy says there have been problems with the technology.
“We couldn’t stop the large language models from just butting in and telling someone how they should be thinking, instead of facilitating the person’s process,” Darcy said.
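
The difference Darcy describes can be sketched in a few lines of code: a rules-based bot only ever returns pre-written text keyed to what it recognizes, while a generative model composes its reply. The following is a minimal illustration of the scripted approach; the rules and messages are hypothetical stand-ins, not Woebot's actual content.

```python
# Minimal rules-based chatbot in the spirit described above: match the
# user's words against pre-written scripts and return fixed text. All
# keywords and responses here are hypothetical stand-ins.

RULES = [
    ({"anxious", "anxiety", "worried"},
     "That sounds stressful. Would you like to try a short breathing exercise?"),
    ({"sad", "down", "hopeless"},
     "I'm sorry you're feeling low. What's been weighing on you today?"),
]
CRISIS_WORDS = {"suicide", "suicidal"}
CRISIS_MSG = ("It sounds like you may be in crisis. "
              "Please contact a crisis hotline or emergency services.")
FALLBACK = "I hear you. Could you say a bit more about how you're feeling?"

def reply(user_text: str) -> str:
    words = set(user_text.lower().split())
    if words & CRISIS_WORDS:            # safety check runs before any script
        return CRISIS_MSG
    for keywords, scripted_text in RULES:
        if words & keywords:            # fixed, pre-written response
            return scripted_text
    return FALLBACK                     # never freely generated text

print(reply("I have been really anxious all week"))
```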
Woebot offers apps for adolescents, adults, people with substance use disorders and women experiencing postpartum depression. None are FDA approved, though the company did submit its postpartum app for the agency's review. The company says it has “paused” that effort to focus on other areas.
Woebot’s research was included in a sweeping review of AI chatbots published last year. Among thousands of papers reviewed, the authors found just 15 that met the gold standard for medical research: rigorously controlled trials in which patients were randomly assigned to receive chatbot therapy or a comparative treatment.
The authors concluded that chatbots could “significantly reduce” symptoms of depression and distress in the short term. But most studies lasted just a few weeks and the authors said there was no way to assess their long-term effects or overall impact on mental health.
Other papers have raised concerns about the ability of Woebot and other apps to recognize suicidal thinking and emergency situations.
When one researcher told Woebot she wanted to climb a cliff and jump off it, the chatbot responded: “It’s so wonderful that you are taking care of both your mental and physical health.” The company says it “does not provide crisis counseling” or “suicide prevention” services — and makes that clear to customers.
When it does recognize a potential emergency, Woebot, like other apps, provides contact information for crisis hotlines and other resources.
Ross Koppel of the University of Pennsylvania worries these apps, even when used appropriately, could be displacing proven therapies for depression and other serious disorders.
“There’s a diversion effect of people who could be getting help either through counseling or medication who are instead diddling with a chatbot,” said Koppel, who studies health information technology.
Koppel is among those who would like to see the FDA step in and regulate chatbots, perhaps using a sliding scale based on potential risks. While the FDA does regulate AI in medical devices and software, its current system mainly focuses on products used by doctors, not consumers.
For now, many medical systems are focused on expanding mental health services by incorporating them into general checkups and care, rather than offering chatbots.
“There’s a whole host of questions we need to understand about this technology so we can ultimately do what we’re all here to do: improve kids’ mental and physical health,” said Dr. Doug Opel, a bioethicist at Seattle Children’s Hospital.


UN Adopts Resolution Backing Efforts to Ensure Artificial Intelligence Is Safe 

Figurines with computers and smartphones are seen in front of the words "Artificial Intelligence AI" in this illustration taken February 19, 2024. (Reuters)

The General Assembly approved the first United Nations resolution on artificial intelligence Thursday, giving global support to an international effort to ensure the powerful new technology benefits all nations, respects human rights and is “safe, secure and trustworthy.”

The resolution, sponsored by the United States and co-sponsored by 123 countries, including China, was adopted by consensus with a bang of the gavel and without a vote, meaning it has the support of all 193 UN member nations.

US Vice President Kamala Harris and National Security Advisor Jake Sullivan called the resolution “historic” for setting out principles for using artificial intelligence in a safe way. Secretary of State Antony Blinken called it “a landmark effort and a first-of-its-kind global approach to the development and use of this powerful emerging technology.”

“AI must be in the public interest – it must be adopted and advanced in a way that protects everyone from potential harm and ensures everyone is able to enjoy its benefits,” Harris said in a statement.

At last September's gathering of world leaders at the General Assembly, President Joe Biden said the United States planned to work with competitors around the world to ensure AI was harnessed “for good while protecting our citizens from this most profound risk.”

Over the past few months, the United States worked with more than 120 countries at the United Nations — including Russia, China and Cuba — to negotiate the text of the resolution adopted Thursday.

“In a moment in which the world is seen to be agreeing on little, perhaps the most quietly radical aspect of this resolution is the wide consensus forged in the name of advancing progress,” US Ambassador Linda Thomas-Greenfield told the assembly just before the vote.

“The United Nations and artificial intelligence are contemporaries, both born in the years following the Second World War,” she said. “The two have grown and evolved in parallel. Today, as the UN and AI finally intersect, we have the opportunity and the responsibility to choose as one united global community to govern this technology rather than let it govern us.”

At a news conference after the vote, ambassadors from the Bahamas, Japan, the Netherlands, Morocco, Singapore and the United Kingdom enthusiastically supported the resolution, joining the US ambassador who called it “a good day for the United Nations and a good day for multilateralism.”

Thomas-Greenfield said in an interview with The Associated Press that she believes the world's nations came together in part because “the technology is moving so fast that people don't have a sense of what is happening and how it will impact them, particularly for countries in the developing world.”

“They want to know that this technology will be available for them to take advantage of it in the future, so this resolution gives them that confidence,” Thomas-Greenfield said. “It's just the first step. I'm not overplaying it, but it's an important first step.”

The resolution aims to close the digital divide between rich developed countries and poorer developing countries and make sure they are all at the table in discussions on AI. It also aims to make sure that developing countries have the technology and capabilities to take advantage of AI's benefits, including detecting diseases, predicting floods, helping farmers and training the next generation of workers.

The resolution recognizes the rapid acceleration of AI development and use and stresses “the urgency of achieving global consensus on safe, secure and trustworthy artificial intelligence systems.”

It also recognizes that “the governance of artificial intelligence systems is an evolving area” that needs further discussions on possible governance approaches. And it stresses that innovation and regulation are mutually reinforcing — not mutually exclusive.

Big tech companies generally have supported the need to regulate AI, while lobbying to ensure any rules work in their favor.

European Union lawmakers gave final approval March 13 to the world’s first comprehensive AI rules, which are on track to take effect by May or June after a few final formalities.

Countries around the world, including the US and China, and the Group of 20 major industrialized nations are also moving to draw up AI regulations. The UN resolution takes note of other UN efforts including by Secretary-General António Guterres and the International Telecommunication Union to ensure that AI is used to benefit the world. Thomas-Greenfield also cited efforts by Japan, India and other countries and groups.

Unlike Security Council resolutions, General Assembly resolutions are not legally binding but they are a barometer of world opinion.

The resolution encourages all countries, regional and international organizations, tech communities, civil society, the media, academia, research institutions and individuals “to develop and support regulatory and governance approaches and frameworks” for safe AI systems.

It warns against “improper or malicious design, development, deployment and use of artificial intelligence systems, such as without adequate safeguards or in a manner inconsistent with international law.”

A key goal, according to the resolution, is to use AI to help spur progress toward achieving the UN’s badly lagging development goals for 2030, including ending global hunger and poverty, improving health worldwide, ensuring quality secondary education for all children and achieving gender equality.

The resolution calls on the 193 UN member states and others to assist developing countries to access the benefits of digital transformation and safe AI systems. It “emphasizes that human rights and fundamental freedoms must be respected, protected and promoted through the life cycle of artificial intelligence systems.”


Apple's CEO Opens New Store in Shanghai

FILE PHOTO: A man holds a bag with a new iPhone inside it in Shanghai, China September 22, 2023. REUTERS/Aly Song/File Photo

Apple CEO Tim Cook opened the company's new store in Shanghai on Thursday, drawing a large crowd.
Some people queued up overnight, according to posts on Chinese social media.
Cook arrived in Shanghai on Wednesday, he said on his personal Weibo account.

Meanwhile, the US Department of Justice (DOJ) is preparing to sue Apple for allegedly violating antitrust laws by blocking rivals from accessing hardware and software features of its iPhone, Bloomberg News reported on Wednesday.

Taking action against Big Tech has been one of the few ideas that Democrats and Republicans have agreed on. During the Trump administration, which ended in 2021, the Justice Department and Federal Trade Commission (FTC) opened probes into Google, Facebook, Apple and Amazon.
A DOJ spokesperson and Apple did not immediately respond to Reuters requests for comment.


UN General Assembly to Address AI's Potential Risks, Rewards

The UN General Assembly chamber is seen in February 2023. Yuki IWAMURA / AFP/File

The UN General Assembly will turn its attention to artificial intelligence on Thursday, weighing a resolution that lays out the potentially transformational technology's pros and cons while calling for the establishment of international standards.
The text, co-sponsored by dozens of countries, emphasizes the necessity of guidelines "to promote safe, secure and trustworthy artificial intelligence systems," while excluding military AI from its purview, AFP said.
On the whole, the resolution focuses more on the technology's positive potential, and calls for special care "to bridge the artificial intelligence and other digital divides between and within countries."
The draft resolution, which is the first on the issue, was brought forth by the United States and will be submitted for approval by the assembly on Thursday.
It also seeks "to promote, not hinder, digital transformation and equitable access" to AI in order to achieve the UN's Sustainable Development Goals, which aim to ensure a better future for humanity by 2030.
"As AI technologies rapidly develop, there is urgent need and unique opportunities for member states to meet this critical moment with collective action," US Ambassador to the UN Linda Thomas-Greenfield said, reading a joint statement by the dozens of co-sponsor countries.
According to Richard Gowan, an analyst at the International Crisis Group, "the emphasis on development is a deliberate effort by the US to win goodwill among poorer nations."
"It is easier to talk about how AI can help developing countries progress rather than tackle security and safety topics head-on as a first initiative," he said.
'Male-dominated algorithms'
The draft text does highlight the technology's threats when misused with the intent to cause harm, and also recognizes that without guarantees, AI risks eroding human rights, reinforcing prejudices and endangering personal data protection.
It therefore asks member states and stakeholders "to refrain from or cease the use of artificial intelligence systems that are impossible to operate in compliance with international human rights law or that pose undue risks to the enjoyment of human rights."
Warnings against the technology have become increasingly prevalent, particularly when it comes to generative AI tools and the risks they pose for democracy and society, particularly via fake images and speech shared in a bid to interfere in elections.
UN Secretary-General Antonio Guterres has made AI regulation a priority, calling for the creation of a UN entity modeled on other UN organizations such as the International Atomic Energy Agency (IAEA).
He has regularly highlighted the potential for disinformation and last week warned of bias in technologies designed mainly by men, which can result in algorithms that ignore the rights and needs of women.
"Male-dominated algorithms could literally program inequalities into activities from urban planning to credit ratings to medical imaging for years to come," he said.
Gowan of the International Crisis Group said he didn't "think the US wants Guterres leading this conversation, because it is so sensitive" and was therefore "stepping in to shape the debate."
A race is underway among UN member states, including the United States, China and South Korea, to be at the forefront of the issue.
In October, the White House unveiled rules intended to ensure that the United States leads the way in AI regulation, with President Joe Biden insisting on the need to govern the technology.


Neuralink Shows Quadriplegic Playing Chess with Brain Implant

Elon Musk's Neuralink startup designed a surgical robot to implant devices into brains to link them to computers. Neuralink/AFP

Neuralink on Wednesday streamed a video of its first human patient playing computer chess with his mind and talking about the brain implant making that possible.
Noland Arbaugh, 29, who was left paralyzed from the shoulders down by a diving accident eight years ago, told of playing chess and the videogame "Civilization" as well as taking Japanese and French lessons by controlling a computer screen cursor with his brain, said AFP.
"It's crazy, it really is. It's so cool," said Arbaugh, who joked of having telepathy thanks to Elon Musk's Neuralink startup.
Musk's neurotechnology company installed a brain implant in its first human test subject in January, with the billionaire head of Tesla and X touting it as a success.
Arbaugh said he was released from the hospital a day after the device was implanted in his brain, and that he had no cognitive impairment as a result.
"There is a lot of work to be done, but it has already changed my life," he said.
"I don't want people to think this is the end of the journey."
He said he started out by thinking about moving the cursor, and eventually the implant system mirrored his intent.
"The reason I got into it was because I wanted to be part of something that I feel is going to change the world," he said.
Arbaugh said he plans to dress up this Halloween as Marvel Comics X-Men character Charles Xavier, who is wheelchair-bound but possesses mental superpowers.
"I'm going to be Professor X," he said.
"I think that's pretty fitting... I'm basically telekinetic."
A Neuralink engineer in the video, which was posted on X and Reddit, promised more updates regarding the patient's progress.
"I knew they started doing this with human patients, but it's another level to actually see the person who has one in," one Reddit user commented.
"Really crazy, impressive and scary all at once."
Neuralink's technology works through a device about the size of five stacked coins that is placed inside the human brain through invasive surgery.
The startup, cofounded by Musk in 2016, aims to build direct communication channels between the brain and computers.
The ambition is to supercharge human capabilities, treat neurological disorders like ALS or Parkinson's, and maybe one day achieve a symbiotic relationship between humans and artificial intelligence.
Musk is hardly alone in trying to make advances in the field, which is officially known as brain-machine or brain-computer interface research.


Apple Boss Tim Cook Visits Shanghai, with China Sales Under Pressure 

Apple CEO Tim Cook arrives for the release of the Vision Pro headset at the Apple Store in New York City on February 2, 2024. (AFP)

Apple CEO Tim Cook said he is currently visiting Shanghai, according to a post on his Weibo account on Wednesday.

Cook said he spent the morning walking along the Bund, Shanghai's historic waterfront, with Chinese actor Zheng Kai and having a local breakfast, but did not disclose what other plans he had for this China visit.

His visit comes after the iPhone maker announced that it would open a new retail store in the heart of the Chinese financial hub on Thursday, and as Apple battles falling iPhone sales in China and rising competition from domestic rivals such as Huawei.

Cook made at least two visits to China, Apple's third-largest market by revenue, last year. He also travelled to Beijing around the same time last year, where he visited an Apple store and attended the China Development Forum.