BlackBerry Reports Near 6% Rise in Quarterly Revenue


Canada's BlackBerry Ltd reported a near 6% rise in quarterly revenue on Thursday, as demand for its security software suite, Spark, and its QNX car software rose.

Total revenue for the second quarter ended Aug. 31 was $259 million, higher than analysts' estimates of $237.6 million, according to IBES data from Refinitiv.

Net loss narrowed to $23 million, or 4 cents per share, from $44 million, or 10 cents per share, a year earlier.



US Requiring New AI Safeguards for Government Use, Transparency

An AI (Artificial Intelligence) sign is seen at the World Artificial Intelligence Conference (WAIC) in Shanghai, China July 6, 2023. REUTERS/Aly Song/File Photo

The White House said Thursday it is requiring federal agencies using artificial intelligence to adopt "concrete safeguards" by Dec. 1 to protect Americans’ rights and ensure safety as the government expands AI use in a wide range of applications.
The Office of Management and Budget issued a directive to federal agencies to monitor, assess and test AI’s impacts "on the public, mitigate the risks of algorithmic discrimination, and provide the public with transparency into how the government uses AI." Agencies must also conduct risk assessments and set operational and governance metrics, Reuters said.
The White House said agencies "will be required to implement concrete safeguards when using AI in a way that could impact Americans' rights or safety" including detailed public disclosures so the public knows how and when artificial intelligence is being used by the government.
President Joe Biden signed an executive order in October invoking the Defense Production Act to require developers of AI systems posing risks to US national security, the economy, public health or safety to share the results of safety tests with the US government before they are publicly released.
The White House on Thursday said new safeguards will ensure air travelers can opt out from Transportation Security Administration facial recognition use without delay in screening. When AI is used in federal healthcare to support diagnostic decisions, a human must oversee "the process to verify the tools’ results."
Generative AI - which can create text, photos and videos in response to open-ended prompts - has spurred excitement as well as fears it could lead to job losses, upend elections and potentially overpower humans with catastrophic effects.
The White House is requiring government agencies to release inventories of AI use cases, report metrics about AI use and release government-owned AI code, models, and data if it does not pose risks.
The Biden administration cited ongoing federal AI uses, including the Federal Emergency Management Agency employing AI to assess structural hurricane damage, while the Centers for Disease Control and Prevention uses AI to predict spread of disease and detect opioid use. The Federal Aviation Administration is using AI to help "deconflict air traffic in major metropolitan areas to improve travel time."
The White House plans to hire 100 AI professionals to promote the safe use of AI and is requiring federal agencies to designate chief AI officers within 60 days.
In January, the Biden administration proposed requiring US cloud companies to determine whether foreign entities are accessing US data centers to train AI models through "know your customer" rules.


Apple Announces Worldwide Developers Conference Dates, In-Person Event

The Apple logo hangs in front of an Apple store on March 21, 2024 in Chicago, Illinois. (Getty Images/AFP)

Apple has announced that its annual developers conference will take place June 10 through June 14.

The big summer event will be live-streamed, but some select developers have been invited to attend in-person events at Apple's campus in Cupertino, California, on June 10.

The company typically showcases its latest software and product updates — including the iPhone, iPad, Apple Watch, Apple TV and Vision Pro headset — during a keynote address on the first day.

Contributing to a drop in Apple’s stock price this year is concern it lags behind Microsoft and Google in the push to develop products powered by artificial intelligence technology.

While Apple tends to keep its product development close to the vest, CEO Tim Cook signaled at the company’s annual shareholder meeting in February that it has been making big investments in generative AI and plans to disclose more later this year.

The week-long conference will have opportunities for developers to connect with Apple designers and engineers to gain insight into new tools, frameworks and features, according to the company's announcement.


EU to Investigate Apple, Google, Meta for Potential Digital Markets Act Breaches

FILE - The Apple logo is illuminated at a store in Munich, Germany, Monday, Nov. 13, 2023. (AP Photo/Matthias Schrader, File)

EU antitrust regulators on Monday opened their first investigations under the Digital Markets Act into Apple, Alphabet's Google and Meta Platforms for potential breaches of the landmark EU tech rules.

"The (European) Commission suspects that the measures put in place by these gatekeepers fall short of effective compliance of their obligations under the DMA," the EU executive said in a statement.

The EU competition enforcer will investigate Alphabet's rules on steering in Google Play and self-preferencing on Google Search, Apple's rules on steering in the App Store and the choice screen for Safari, and Meta's 'pay or consent' model.

The Commission also launched investigatory steps relating to Apple's new fee structure for alternative app stores and Amazon's ranking practices on its marketplace.


Apple Vision Pro to Hit Mainland China this Year

Apple CEO Tim Cook speaks during a parallel session of the China Development Forum at the Diaoyutai State Guesthouse in Beijing, China, on Sunday, March 24, 2024. (AP Photo/Tatan Syuflana)

Apple Vision Pro will hit the mainland China market this year, Apple chief executive Tim Cook said on Sunday, according to state media.

Cook revealed the headset's China launch plan in response to a media question on the sidelines of the China Development Forum in Beijing, CCTV finance said on its Weibo social account.

Apple will continue to ramp up research and development investment in China, he was quoted as saying.


AI Chatbots are Here to Help with Your Mental Health, despite Limited Evidence they Work

Representation photo: The word Pegasus and binary code are displayed on a smartphone which is placed on a keyboard in this illustration taken May 4, 2022. (Reuters)

Download the mental health chatbot Earkick and you’re greeted by a bandana-wearing panda who could easily fit into a kids' cartoon.
Start talking or typing about anxiety and the app generates the kind of comforting, sympathetic statements therapists are trained to deliver. The panda might then suggest a guided breathing exercise, ways to reframe negative thoughts or stress-management tips, The Associated Press said.
It's all part of a well-established approach used by therapists, but please don’t call it therapy, says Earkick co-founder Karin Andrea Stephan.
“When people call us a form of therapy, that’s OK, but we don’t want to go out there and tout it,” says Stephan, a former professional musician and self-described serial entrepreneur. “We just don’t feel comfortable with that.”
The question of whether these artificial intelligence-based chatbots are delivering a mental health service or are simply a new form of self-help is critical to the emerging digital health industry — and its survival.
Earkick is one of hundreds of free apps that are being pitched to address a crisis in mental health among teens and young adults. Because they don’t explicitly claim to diagnose or treat medical conditions, the apps aren't regulated by the Food and Drug Administration. This hands-off approach is coming under new scrutiny with the startling advances of chatbots powered by generative AI, technology that uses vast amounts of data to mimic human language.
The industry argument is simple: Chatbots are free, available 24/7 and don’t come with the stigma that keeps some people away from therapy.
But there’s limited data that they actually improve mental health. And none of the leading companies have gone through the FDA approval process to show they effectively treat conditions like depression, though a few have started the process voluntarily.
“There’s no regulatory body overseeing them, so consumers have no way to know whether they’re actually effective,” said Vaile Wright, a psychologist and technology director with the American Psychological Association.
Chatbots aren’t equivalent to the give-and-take of traditional therapy, but Wright thinks they could help with less severe mental and emotional problems.
Earkick’s website states that the app does not “provide any form of medical care, medical opinion, diagnosis or treatment.”
Some health lawyers say such disclaimers aren’t enough.
“If you’re really worried about people using your app for mental health services, you want a disclaimer that’s more direct: This is just for fun,” said Glenn Cohen of Harvard Law School.
Still, chatbots are already playing a role due to an ongoing shortage of mental health professionals.
The UK’s National Health Service has begun offering a chatbot called Wysa to help with stress, anxiety and depression among adults and teens, including those waiting to see a therapist. Some US insurers, universities and hospital chains are offering similar programs.
Dr. Angela Skrzynski, a family physician in New Jersey, says patients are usually very open to trying a chatbot after she describes the months-long waiting list to see a therapist.
Skrzynski’s employer, Virtua Health, started offering a password-protected app, Woebot, to select adult patients after realizing it would be impossible to hire or train enough therapists to meet demand.
“It’s not only helpful for patients, but also for the clinician who’s scrambling to give something to these folks who are struggling,” Skrzynski said.
Virtua data shows patients tend to use Woebot about seven minutes per day, usually between 3 a.m. and 5 a.m.
Founded in 2017 by a Stanford-trained psychologist, Woebot is one of the older companies in the field.
Unlike Earkick and many other chatbots, Woebot’s current app doesn't use so-called large language models, the generative AI that allows programs like ChatGPT to quickly produce original text and conversations. Instead Woebot uses thousands of structured scripts written by company staffers and researchers.
Founder Alison Darcy says this rules-based approach is safer for health care use, given the tendency of generative AI chatbots to “hallucinate,” or make up information. Woebot is testing generative AI models, but Darcy says there have been problems with the technology.
“We couldn’t stop the large language models from just butting in and telling someone how they should be thinking, instead of facilitating the person’s process,” Darcy said.
Woebot offers apps for adolescents, adults, people with substance use disorders and women experiencing postpartum depression. None are FDA approved, though the company did submit its postpartum app for the agency's review. The company says it has “paused” that effort to focus on other areas.
Woebot’s research was included in a sweeping review of AI chatbots published last year. Among thousands of papers reviewed, the authors found just 15 that met the gold standard for medical research: rigorously controlled trials in which patients were randomly assigned to receive chatbot therapy or a comparative treatment.
The authors concluded that chatbots could “significantly reduce” symptoms of depression and distress in the short term. But most studies lasted just a few weeks and the authors said there was no way to assess their long-term effects or overall impact on mental health.
Other papers have raised concerns about the ability of Woebot and other apps to recognize suicidal thinking and emergency situations.
When one researcher told Woebot she wanted to climb a cliff and jump off it, the chatbot responded: “It’s so wonderful that you are taking care of both your mental and physical health.” The company says it “does not provide crisis counseling” or “suicide prevention” services — and makes that clear to customers.
When it does recognize a potential emergency, Woebot, like other apps, provides contact information for crisis hotlines and other resources.
Ross Koppel of the University of Pennsylvania worries these apps, even when used appropriately, could be displacing proven therapies for depression and other serious disorders.
“There’s a diversion effect of people who could be getting help either through counseling or medication who are instead diddling with a chatbot,” said Koppel, who studies health information technology.
Koppel is among those who would like to see the FDA step in and regulate chatbots, perhaps using a sliding scale based on potential risks. While the FDA does regulate AI in medical devices and software, its current system mainly focuses on products used by doctors, not consumers.
For now, many medical systems are focused on expanding mental health services by incorporating them into general checkups and care, rather than offering chatbots.
“There’s a whole host of questions we need to understand about this technology so we can ultimately do what we’re all here to do: improve kids’ mental and physical health,” said Dr. Doug Opel, a bioethicist at Seattle Children’s Hospital.


UN Adopts Resolution Backing Efforts to Ensure Artificial Intelligence Is Safe 

Figurines with computers and smartphones are seen in front of the words "Artificial Intelligence AI" in this illustration taken, February 19, 2024. (Reuters)

The General Assembly approved the first United Nations resolution on artificial intelligence Thursday, giving global support to an international effort to ensure the powerful new technology benefits all nations, respects human rights and is “safe, secure and trustworthy.”

The resolution, sponsored by the United States and co-sponsored by 123 countries, including China, was adopted by consensus with a bang of the gavel and without a vote, meaning it has the support of all 193 UN member nations.

US Vice President Kamala Harris and National Security Advisor Jake Sullivan called the resolution “historic” for setting out principles for using artificial intelligence in a safe way. Secretary of State Antony Blinken called it “a landmark effort and a first-of-its-kind global approach to the development and use of this powerful emerging technology.”

“AI must be in the public interest – it must be adopted and advanced in a way that protects everyone from potential harm and ensures everyone is able to enjoy its benefits,” Harris said in a statement.

At last September's gathering of world leaders at the General Assembly, President Joe Biden said the United States planned to work with competitors around the world to ensure AI was harnessed “for good while protecting our citizens from this most profound risk.”

Over the past few months, the United States worked with more than 120 countries at the United Nations — including Russia, China and Cuba — to negotiate the text of the resolution adopted Thursday.

“In a moment in which the world is seen to be agreeing on little, perhaps the most quietly radical aspect of this resolution is the wide consensus forged in the name of advancing progress,” US Ambassador Linda Thomas-Greenfield told the assembly just before the vote.

“The United Nations and artificial intelligence are contemporaries, both born in the years following the Second World War,” she said. “The two have grown and evolved in parallel. Today, as the UN and AI finally intersect, we have the opportunity and the responsibility to choose as one united global community to govern this technology rather than let it govern us.”

At a news conference after the vote, ambassadors from the Bahamas, Japan, the Netherlands, Morocco, Singapore and the United Kingdom enthusiastically supported the resolution, joining the US ambassador who called it “a good day for the United Nations and a good day for multilateralism.”

Thomas-Greenfield said in an interview with The Associated Press that she believes the world's nations came together in part because “the technology is moving so fast that people don't have a sense of what is happening and how it will impact them, particularly for countries in the developing world.”

“They want to know that this technology will be available for them to take advantage of it in the future, so this resolution gives them that confidence,” Thomas-Greenfield said. “It's just the first step. I'm not overplaying it, but it's an important first step.”

The resolution aims to close the digital divide between rich developed countries and poorer developing countries and make sure they are all at the table in discussions on AI. It also aims to make sure that developing countries have the technology and capabilities to take advantage of AI's benefits, including detecting diseases, predicting floods, helping farmers and training the next generation of workers.

The resolution recognizes the rapid acceleration of AI development and use and stresses “the urgency of achieving global consensus on safe, secure and trustworthy artificial intelligence systems.”

It also recognizes that “the governance of artificial intelligence systems is an evolving area” that needs further discussions on possible governance approaches. And it stresses that innovation and regulation are mutually reinforcing — not mutually exclusive.

Big tech companies generally have supported the need to regulate AI, while lobbying to ensure any rules work in their favor.

European Union lawmakers gave final approval March 13 to the world’s first comprehensive AI rules, which are on track to take effect by May or June after a few final formalities.

Countries around the world, including the US and China, and the Group of 20 major industrialized nations are also moving to draw up AI regulations. The UN resolution takes note of other UN efforts including by Secretary-General António Guterres and the International Telecommunication Union to ensure that AI is used to benefit the world. Thomas-Greenfield also cited efforts by Japan, India and other countries and groups.

Unlike Security Council resolutions, General Assembly resolutions are not legally binding but they are a barometer of world opinion.

The resolution encourages all countries, regional and international organizations, tech communities, civil society, the media, academia, research institutions and individuals “to develop and support regulatory and governance approaches and frameworks” for safe AI systems.

It warns against “improper or malicious design, development, deployment and use of artificial intelligence systems, such as without adequate safeguards or in a manner inconsistent with international law.”

A key goal, according to the resolution, is to use AI to help spur progress toward achieving the UN’s badly lagging development goals for 2030, including ending global hunger and poverty, improving health worldwide, ensuring quality secondary education for all children and achieving gender equality.

The resolution calls on the 193 UN member states and others to assist developing countries to access the benefits of digital transformation and safe AI systems. It “emphasizes that human rights and fundamental freedoms must be respected, protected and promoted through the life cycle of artificial intelligence systems.”


Apple's CEO Opens New Store in Shanghai

FILE PHOTO: A man holds a bag with a new iPhone inside it in Shanghai, China September 22, 2023. REUTERS/Aly Song/File Photo

Apple CEO Tim Cook opened the company's new store in Shanghai on Thursday, drawing a large crowd.
Some people queued up overnight, according to posts on Chinese social media.
Cook arrived in Shanghai on Wednesday, he said on his personal Weibo account.

Meanwhile, the US Department of Justice (DOJ) is preparing to sue Apple for allegedly violating antitrust laws by blocking rivals from accessing hardware and software features of its iPhone, Bloomberg News reported on Wednesday.

Taking action against Big Tech has been one of the few ideas that Democrats and Republicans have agreed on. During the Trump administration, which ended in 2021, the Justice Department and Federal Trade Commission (FTC) opened probes into Google, Facebook, Apple and Amazon.
A DOJ spokesperson and Apple did not immediately respond to Reuters requests for comment.


UN General Assembly to Address AI's Potential Risks, Rewards

The UN General Assembly chamber is seen in February 2023. Yuki IWAMURA / AFP/File

The UN General Assembly will turn its attention to artificial intelligence on Thursday, weighing a resolution that lays out the potentially transformational technology's pros and cons while calling for the establishment of international standards.
The text, co-sponsored by dozens of countries, emphasizes the necessity of guidelines "to promote safe, secure and trustworthy artificial intelligence systems," while excluding military AI from its purview, AFP said.
On the whole, the resolution focuses more on the technology's positive potential, and calls for special care "to bridge the artificial intelligence and other digital divides between and within countries."
The draft resolution, which is the first on the issue, was brought forth by the United States and will be submitted for approval by the assembly on Thursday.
It also seeks "to promote, not hinder, digital transformation and equitable access" to AI in order to achieve the UN's Sustainable Development Goals, which aim to ensure a better future for humanity by 2030.
"As AI technologies rapidly develop, there is urgent need and unique opportunities for member states to meet this critical moment with collective action," US Ambassador to the UN Linda Thomas-Greenfield said, reading a joint statement by the dozens of co-sponsor countries.
According to Richard Gowan, an analyst at the International Crisis Group, "the emphasis on development is a deliberate effort by the US to win goodwill among poorer nations."
"It is easier to talk about how AI can help developing countries progress rather than tackle security and safety topics head-on as a first initiative," he said.
'Male-dominated algorithms'
The draft text does highlight the technology's threats when misused with the intent to cause harm, and also recognizes that without guarantees, AI risks eroding human rights, reinforcing prejudices and endangering personal data protection.
It therefore asks member states and stakeholders "to refrain from or cease the use of artificial intelligence systems that are impossible to operate in compliance with international human rights law or that pose undue risks to the enjoyment of human rights."
Warnings against the technology have become increasingly prevalent, particularly when it comes to generative AI tools and the risks they pose for democracy and society, notably via fake images and speech shared in a bid to interfere in elections.
UN Secretary-General Antonio Guterres has made AI regulation a priority, calling for the creation of a UN entity modeled on other UN organizations such as the International Atomic Energy Agency (IAEA).
He has regularly highlighted the potential for disinformation and last week warned of bias in technologies designed mainly by men, which can result in algorithms that ignore the rights and needs of women.
"Male-dominated algorithms could literally program inequalities into activities from urban planning to credit ratings to medical imaging for years to come," he said.
Gowan of the International Crisis Group said he didn't "think the US wants Guterres leading this conversation, because it is so sensitive" and was therefore "stepping in to shape the debate."
A race is underway among UN member states, including the United States, China and South Korea, to be at the forefront of the issue.
In October, the White House unveiled rules intended to ensure that the United States leads the way in AI regulation, with President Joe Biden insisting on the need to govern the technology.


Neuralink Shows Quadriplegic Playing Chess with Brain Implant

Elon Musk's Neuralink startup designed a surgical robot to implant devices into brains to link them to computers. Neuralink/AFP

Neuralink on Wednesday streamed a video of its first human patient playing computer chess with his mind and talking about the brain implant making that possible.
Noland Arbaugh, 29, who was left paralyzed from the shoulders down by a diving accident eight years ago, told of playing chess and the video game "Civilization" as well as taking Japanese and French lessons by controlling a computer screen cursor with his brain, said AFP.
"It's crazy, it really is. It's so cool," said Arbaugh, who joked of having telepathy thanks to Elon Musk's Neuralink startup.
Musk's neurotechnology company installed a brain implant in its first human test subject in January, with the billionaire head of Tesla and X touting it as a success.
Arbaugh said he was released from the hospital a day after the device was implanted in his brain, and that he had no cognitive impairment as a result.
"There is a lot of work to be done, but it has already changed my life," he said.
"I don't want people to think this is the end of the journey."
He told of starting out by thinking about moving the cursor and eventually the implant system mirrored his intent.
"The reason I got into it was because I wanted to be part of something that I feel is going to change the world," he said.
Arbaugh said he plans to dress up this Halloween as Marvel Comics X-Men character Charles Xavier, who is wheelchair-bound but possesses mental superpowers.
"I'm going to be Professor X," he said.
"I think that's pretty fitting... I'm basically telekinetic."
A Neuralink engineer in the video, which was posted on X and Reddit, promised more updates regarding the patient's progress.
"I knew they started doing this with human patients, but it's another level to actually see the person who has one in," one Reddit user commented.
"Really crazy, impressive and scary all at once."
Neuralink's technology works through a device about the size of five stacked coins that is placed inside the human brain through invasive surgery.
The startup, cofounded by Musk in 2016, aims to build direct communication channels between the brain and computers.
The ambition is to supercharge human capabilities, treat neurological disorders like ALS or Parkinson's, and maybe one day achieve a symbiotic relationship between humans and artificial intelligence.
Musk is hardly alone in trying to make advances in the field, which is officially known as brain-machine or brain-computer interface research.


Apple Boss Tim Cook Visits Shanghai, with China Sales Under Pressure 

Apple CEO Tim Cook arrives for the release of the Vision Pro headset at the Apple Store in New York City on February 2, 2024. (AFP)

Apple CEO Tim Cook said he is currently visiting Shanghai, according to a post on his Weibo account on Wednesday.

Cook said he spent the morning walking along Shanghai's historic Bund waterfront with Chinese actor Zheng Kai and having a local breakfast, but did not disclose what other plans he had for this China visit.

His visit comes after the iPhone maker announced that it would open a new retail store in the heart of the Chinese financial hub on Thursday, and as Apple battles falling iPhone sales in China and rising competition from domestic rivals such as Huawei.

Cook made at least two visits to China, Apple's third-largest market by revenue, last year. He also travelled to Beijing around the same time last year, where he visited an Apple store and attended the China Development Forum.