Apple CEO Tim Cook is Fulfilling Another Steve Jobs Vision

Apple CEO Tim Cook. (Reuters)

Apple co-founder Steve Jobs, who died in 2011, was a tough act to follow. But Tim Cook seems to be doing so well at it that his eventual successor may also have big shoes to fill.

Initially seen as a mere caretaker for the iconic franchise that Jobs built, Cook has forged his own distinctive legacy. He will mark his ninth anniversary as Apple’s CEO Monday -- the same day the company will split its stock for the second time during his reign.

Grooming Cook as heir apparent was “one of Steve Jobs’ greatest accomplishments that is vastly underappreciated,” said long-time Apple analyst Gene Munster, who is now managing partner of Loup Ventures.

The upcoming four-for-one stock split, a move that has no effect on a company's overall value but often spurs investor enthusiasm, is one measure of Apple's success under Cook. The company was worth just under $400 billion when Cook took the helm; it's worth five times more than that today, and has just become the first US company to boast a market value of $2 trillion. Its share performance has easily eclipsed the benchmark S&P 500, which has roughly tripled in value during the past nine years.

But it hasn't always been easy. Among the challenges Cook has faced: a slowdown in iPhone sales as smartphones matured, a showdown with the FBI over user privacy, a US trade war with China that threatened to force up iPhone prices and now a pandemic that has closed many of Apple's retail stores and sunk the economy into a deep recession.

Cook, 59, has also struck out into novel territory. Apple now pays a quarterly dividend, a step Jobs resisted partly because he associated shareholder payments with stodgy companies that were past their prime. Cook also used his powerful perch to become an outspoken advocate for civil rights and renewable energy, and on a personal level came out as the first openly gay CEO of a Fortune 500 company in 2014.

Apple declined to make Cook available for an interview. But it did point to 2009 comments Cook made to financial analysts when he was running the company while Jobs battled pancreatic cancer.

Asked what the company might look like under his management, Cook said that Apple needs “to own and control the primary technologies behind the products we make." It has doubled down on that commitment, becoming a major chip producer in order to supply both iPhones and Macs. He added that Apple would resist exploring most projects “so that we can really focus on the few that are truly important and meaningful to us."

That laser focus has served Apple well. At the same time, though, under Cook's stewardship, Apple has largely failed to come up with breakthrough successors to the iPhone. Its smartwatch and wireless earbuds have emerged as market leaders, but not game-changers.

Cook and other executives have dropped hints that Apple wants to make a big splash in the field of augmented reality, which uses phone screens or high-tech eyewear to paint digital images into the real world. Apple has yet to deliver, although neither have other companies that have hyped the technology.

Apple also remains a laggard in artificial intelligence, particularly in the increasingly important market for voice-activated digital assistants. Although Apple's Siri is widely used on Apple devices, Amazon's Alexa and Google’s digital assistant have made major inroads in helping people manage their lives, particularly in homes and offices.

Apple also has stumbled a few times under Cook's leadership.

In 2017, it alienated customers by deliberately but quietly slowing the performance of older iPhones via a software update, ostensibly to spare the life of aging batteries. Many consumers, though, viewed it as a ploy to boost sales of newer and more expensive iPhones. Amid the furor, Apple offered to replace aging batteries at a steep discount; later it paid $500 million to settle a class-action lawsuit over the matter.

Apple has also faced government investigations into its aggressive efforts to minimize its corporate taxes and complaints that it has abused control of its app store to charge excessive fees and stifle competition to its own digital services. On the tax front, a court ruled in July that Apple did nothing wrong.

Cook has turned the app store into the cornerstone of a services division that he set out to expand four years ago. At the time, it was growing clear that sales of the iPhone -- Apple’s biggest money maker -- were destined to slow down as innovations grew sparse and consumers kept their old devices for longer.

To help offset that trend, Cook began to emphasize recurring revenue from app commissions, warranty programs and streaming subscriptions to music, video, games and news sold for the more than 1.5 billion devices already running on the company’s software.

Apple’s services division now generates $50 billion in annual revenue, more than all but 65 companies in the Fortune 500. Wedbush Securities analyst Daniel Ives estimates the services division by itself is worth about $750 billion -- roughly what Facebook is currently worth in its entirety.

That division might be worth even more today had Cook done what many analysts believe Apple should have done at least five years ago: dip into a cash hoard that at one point surpassed $260 billion to buy Netflix or a major movie studio to fuel its video streaming ambitions.

Buying Netflix seemed within the realm of possibility five years ago, when the video streaming service was valued at around $40 billion. With Netflix now worth more than $200 billion, that idea seems off the table, even for a company with Apple's vast resources.



Nebius Signs AI Capacity Deal with Meta for at Least $12 Billion

FILE PHOTO: The logo of Nebius during the Viva Technology conference dedicated to innovation and startups at Porte de Versailles exhibition center in Paris, France, June 12, 2025. REUTERS/Benoit Tessier/File Photo

Amsterdam-based Nebius Group said on Monday it has signed a new five-year deal with Meta Platforms to provide the social media giant with $12 billion of dedicated AI computing capacity across multiple locations by 2027.

Under the deal, Meta will also buy an additional $15 billion worth of capacity planned by Nebius over the coming five years if it is not sold to other customers, giving the contract a total value of up to $27 billion, Nebius said.

Nebius is a so-called "neocloud" company that sells hardware and cloud capacity as services to other tech firms. It uses Nvidia processors to provide AI cloud infrastructure.

It signed an initial $3 billion deal with Meta in November.


ByteDance Reportedly Suspends Launch of Video AI Model after Copyright Disputes

FILE PHOTO: The ByteDance logo is seen at the company's office building in Shanghai, China July 4, 2023. REUTERS/Aly Song/File Photo

TikTok's Chinese parent, ByteDance, has put on hold the global launch of its latest video-generation model, Seedance 2.0, after a series of copyright disputes with major Hollywood studios and streaming platforms, The Information reported on Saturday, citing two people with direct knowledge of the situation.

Reuters could not immediately verify the report. ByteDance did not immediately respond to a request for comment. ByteDance said last month it would take steps to prevent the unauthorized use of intellectual property on its AI video generator Seedance 2.0, following threats of legal action from US studios, including Disney.

Disney sent a cease-and-desist letter to the Chinese firm last month, accusing it of using Disney characters to train and power Seedance 2.0 without permission, after videos generated by the model went viral in China, including one of Tom Cruise and Brad Pitt in a fight.

Disney said ByteDance had pre-packaged Seedance with a pirated library of copyrighted characters from franchises including Star Wars and Marvel, portraying them as public-domain clip art. ByteDance, which officially unveiled the model in February, has said the system is aimed at professional film, e-commerce and advertising use, highlighting its ability to process text, images, audio and video at once to reduce content production costs.

Seedance 2.0 has drawn attention after earning comparisons with DeepSeek, a Chinese AI company that has built models rivaling those of Anthropic and OpenAI. Tech executives, including Elon Musk, have praised its ability to generate cinematic storylines from a handful of prompts.

ByteDance had been aiming to make the new video model available to customers worldwide in mid-March, but the company has since suspended those plans, The Information report said.

ByteDance's legal team is working to identify and resolve potential legal issues, and engineers are adding safeguards to prevent the model from generating content that could lead to further intellectual property violations, the report added.


Fake Images, Videos in Wartime: How to Tell Fact from Deepfakes

Misinformation spreads rapidly on social media during crises and conflicts (Shutterstock)

As tensions escalate across several fronts in the Middle East, information is spreading almost as quickly as the events themselves.

Social media platforms are often the first place where images, videos, and reports of alleged attacks or military developments appear.

But alongside legitimate information, a wave of misleading or fabricated content is also circulating online, making it increasingly difficult to separate fact from fiction.

A Growing Digital Challenge

Cybersecurity experts warn that the rapid spread of misinformation, particularly through manipulated videos and deepfake technologies, has become a growing digital threat during periods of geopolitical instability.

Maher Yamout, Lead Security Researcher at Kaspersky, told Asharq Al-Awsat that distinguishing reliable information from false narratives becomes especially critical during emergencies, when emotions run high, and people tend to share content quickly without verifying it.

“With developments unfolding in the Middle East, government authorities in Gulf Cooperation Council countries have warned against publishing or circulating information from unknown sources,” he said.

“Fake news, misleading or inaccurate information presented as real news, becomes more dangerous during emergencies.”

Misinformation Spreads Fast

Fake news is not new, but its scale and speed have changed dramatically with the rise of social media and artificial intelligence tools. During periods of geopolitical tension, unverified reports or manipulated videos can spread within minutes, reaching millions before fact-checkers can respond.

Experts generally divide fake news into two main categories. The first involves fully fabricated content designed to influence public opinion or attract traffic to specific websites. The second contains elements of truth but presents them inaccurately because the author failed to verify all the facts or exaggerated certain details.

Both can confuse audiences during crises, particularly when users rely on social media rather than trusted news outlets for updates.

Authorities in several countries have also warned that sharing inaccurate information, even unintentionally, may expose users to legal accountability.

Governments and digital security experts are therefore urging greater digital awareness and responsibility when sharing information during sensitive periods.

AI-Powered Deception

Artificial intelligence has added a new layer to the misinformation problem through so-called deepfake technologies, fabricated videos created using machine learning techniques such as face swapping or synthetic visual generation.

In some cases, authentic footage can be altered to appear as if it documents events that never occurred.

Yamout said verifying information has become more important than ever with the spread of deepfakes.

“Artificial intelligence makes it possible to combine different video clips to produce new scenes showing events or actions that never happened in reality, often with highly realistic results,” he said.

Such technology can make manipulated videos appear convincing and potentially mislead users, especially when they circulate in emotionally charged contexts. Edited clips may appear to show attacks, military movements, or political statements that never took place.

Even when these videos are later debunked, their initial spread can still trigger confusion or public anxiety.

How to Verify Information

Cybersecurity experts say users themselves play a key role in limiting the spread of misinformation. While platforms and regulators are developing tools to detect fake content, individuals can take simple steps to verify information before sharing it.

The first step is checking the source. Websites that publish false information may contain spelling errors in their web addresses or use unusual domains that mimic well-known media outlets.

Yamout advises carefully reviewing the website address and checking the “About Us” section on unfamiliar sites. It is generally safer to rely on official sources such as government websites or trusted media organizations.
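The advice above, comparing an unfamiliar web address against outlets you already trust, can be sketched in code. The snippet below is a minimal illustration, not a real verification tool: the trusted-domain list is an assumption made up for the example, the two-character threshold is arbitrary, and genuine fact-checking relies on far more signals than spelling distance.

```python
# Minimal sketch: flag web domains that closely imitate trusted news outlets.
# TRUSTED and the distance threshold are illustrative assumptions only.

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

TRUSTED = ["reuters.com", "apnews.com", "bbc.com"]  # hypothetical allow-list

def lookalike_warning(domain):
    """Warn if `domain` nearly matches, but is not, a trusted site."""
    for known in TRUSTED:
        d = edit_distance(domain.lower(), known)
        if 0 < d <= 2:  # one or two character tweaks away
            return f"'{domain}' resembles '{known}' but is not identical"
    return None

if __name__ == "__main__":
    print(lookalike_warning("reutters.com"))  # flags the extra 't'
    print(lookalike_warning("reuters.com"))   # exact match, no warning
```

Edit distance catches only typosquatting-style imitations; it says nothing about a site's actual trustworthiness, which is why the checks of authorship, dates, and corroborating outlets described here remain essential.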

Users should also verify the identity of the author or the organization behind the report. If the author is unknown or lacks clear expertise in the subject, the information should be treated cautiously.

Comparing reports with other credible sources is also important. Professional news organizations follow editorial guidelines and verification procedures, meaning major events are typically reported by multiple reputable outlets.

Yamout also highlighted the importance of checking dates and timelines, noting that some misleading content recirculates old events and presents them as recent developments.

He added that social media algorithms can create so-called “echo chambers,” where users are shown content that aligns with their existing views and interests. This makes it essential to consult diverse and reliable sources before forming conclusions.

Playing on Emotions

Many fake news stories are designed to provoke strong emotional reactions. Sensational headlines or dramatic clips are often crafted to trigger fear, anger, or shock, emotions that increase the likelihood that users will quickly share the content.

“Many fake news stories are written in a clever way to provoke strong emotional reactions,” Yamout said.

Maintaining critical thinking and asking a simple question — why was this story written? — can help users avoid spreading misinformation, he added.

This dynamic is amplified on social media platforms, where algorithms tend to promote content that generates strong engagement. Emotionally charged posts can therefore spread faster than balanced reporting.

Spotting Signs of Manipulation

Images and videos themselves may provide clues that they have been altered. Edited photos may display distorted background lines, unnatural shadows, or unrealistic skin tones.

In manipulated videos, inconsistencies may appear in lighting, eye movement, or facial expressions. While these signs are not always easy to detect, particularly on smartphones, they can raise doubts about the authenticity of widely shared clips.

A Shared Digital Responsibility

Experts say limiting the spread of misinformation during crises requires cooperation among governments, technology companies, media organizations, and users.

Yamout said the simplest rule may also be the most effective: “If you are not sure the content is accurate, do not share it.”

Responsible sharing can help curb the spread of misinformation and protect digital communities.

As digital platforms continue to shape how information travels across borders, the ability to critically evaluate online content is becoming an essential skill.

During periods of geopolitical tension and conflict, when rumors and facts can blur, the challenge is not only cybersecurity but also protecting the credibility of information itself.