From Algorithms to AI: A 25-Year Journey of Human Advancement

A facial recognition system using AI. Getty

Over the past 25 years, technological innovation has accelerated at an unprecedented pace, transforming societies worldwide. Historically, technologies like electricity and the telephone took decades to reach 25% of US households: 46 and 35 years, respectively. In stark contrast, the internet did so in just seven years. Platforms like Facebook gained 50 million users in two years, Netflix rapidly redefined media consumption, and ChatGPT attracted over a million users in merely five days. This ever-faster adoption reflects both accelerating technological advances and a societal shift toward embracing innovation.

Leading this wave was Google, a startup founded in a garage. In 1998, Google introduced the PageRank algorithm, revolutionizing how information on the web was organized. Unlike traditional search engines, which ranked pages largely by keyword frequency, PageRank assessed a page’s importance by analyzing how pages link to one another, treating each hyperlink as a vote of confidence and thereby capturing the collective judgment of the web. Finding relevant information became faster and more intuitive, making Google’s search engine indispensable worldwide.
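
To make the “links as votes” idea concrete, here is a minimal power-iteration sketch of PageRank in Python. The three-page link graph is invented for illustration, and Google’s production system is vastly more elaborate, but the core computation really is this simple.

```python
import numpy as np

def pagerank(links, damping=0.85, iterations=50):
    """Toy power-iteration PageRank over an adjacency dict {page: [pages it links to]}."""
    pages = list(links)
    n = len(pages)
    idx = {p: i for i, p in enumerate(pages)}
    # Column-stochastic matrix: M[j, i] is the share of page i's vote given to page j.
    M = np.zeros((n, n))
    for page, outgoing in links.items():
        if outgoing:
            for target in outgoing:
                M[idx[target], idx[page]] = 1.0 / len(outgoing)
        else:
            M[:, idx[page]] = 1.0 / n  # dangling page: spread its vote evenly
    rank = np.full(n, 1.0 / n)
    for _ in range(iterations):
        rank = (1 - damping) / n + damping * (M @ rank)
    return dict(zip(pages, rank))

# Page "c" is linked to by both "a" and "b", so it should rank highest.
print(pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]}))
```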

Amid the data revolution, a new computing paradigm emerged: machine learning. Developers began creating algorithms that learn from data and improve over time, moving away from explicitly programmed rules. Netflix exemplified this shift with its 2006 Netflix Prize, offering $1 million for a 10% improvement in its recommendation algorithm. In 2009, the team BellKor’s Pragmatic Chaos claimed the prize with an ensemble of machine learning techniques, highlighting the power of adaptive algorithms.
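
One key ingredient of the winning ensemble was matrix factorization, which represents each user and each movie as a short vector of learned “taste” factors whose dot product predicts a rating. The sketch below trains such a model by stochastic gradient descent; all the ratings and dimensions are made-up toy values.

```python
import numpy as np

# Hypothetical (user, movie, rating) triples; the real dataset had ~100 million.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0), (2, 1, 2.0), (2, 2, 5.0)]
n_users, n_movies, k = 3, 3, 2  # k latent factors per user and per movie

rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(n_users, k))   # user factor vectors
Q = rng.normal(scale=0.1, size=(n_movies, k))  # movie factor vectors

lr, reg = 0.05, 0.02  # learning rate and L2 regularization strength
for epoch in range(500):
    for u, m, r in ratings:
        err = r - P[u] @ Q[m]                   # error of the current prediction
        pu = P[u].copy()                        # keep old value for the Q update
        P[u] += lr * (err * Q[m] - reg * P[u])  # nudge user factors toward the rating
        Q[m] += lr * (err * pu - reg * Q[m])    # nudge movie factors the same way

print(round(float(P[0] @ Q[2]), 2))  # predicted rating for an unseen user-movie pair
```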

Researchers then delved into deep learning, a subset of machine learning in which multi-layered neural networks learn from vast amounts of data. In 2011, IBM’s Watson showcased the power of machine-driven language understanding on “Jeopardy!” Competing against champions Brad Rutter and Ken Jennings, Watson handled complex language, puns, and riddles well enough to secure victory. This high-profile demonstration of AI’s language processing paved the way for numerous natural language processing applications.

In 2016, Google DeepMind’s AlphaGo achieved a historic milestone by defeating Go world champion Lee Sedol. Go, a game whose enormous search space rewards intuition as much as calculation, had long been considered beyond AI’s reach. AlphaGo’s victory astonished the world, signaling that deep neural networks, trained with reinforcement learning, could tackle problems once thought to require human strategic thinking.

As AI capabilities grew, businesses began integrating these technologies to innovate. Amazon revolutionized retail by harnessing AI for personalized shopping. By analyzing customers’ habits, Amazon’s algorithms recommended products accurately, streamlined logistics, and optimized inventory. Personalization became a cornerstone of Amazon’s success, setting new customer service expectations.

In the automotive sector, Tesla led in integrating AI into consumer products. With Autopilot, Tesla offered a glimpse into transportation’s future. Initially, Autopilot used AI to process data from cameras and sensors, enabling adaptive cruise control, lane centering, and self-parking. By 2024, Tesla’s Full Self-Driving (FSD) software could handle many driving situations with minimal driver input, though it still required active human supervision. This leap helped redefine driving and accelerated broader efforts to develop self-driving vehicles, such as Waymo’s robotaxis.

Healthcare also witnessed AI’s transformative impact. Researchers developed algorithms that detect patterns in imaging data imperceptible to the human eye. For example, AI systems analyzing mammograms have identified subtle changes predictive of cancer, enabling earlier interventions and potentially saving lives.

In 2020, DeepMind’s AlphaFold achieved a breakthrough: accurately predicting protein structures from amino acid sequences—a challenge that had eluded scientists for decades. Understanding protein folding is crucial for drug discovery and disease research. DeepMind’s spin-off, Isomorphic Labs, is leveraging the latest AlphaFold models and partnering with major pharmaceutical companies to accelerate biomedical research, potentially leading to new treatments at an unprecedented pace.

The finance industry quickly embraced AI. PayPal implemented advanced algorithms to detect and prevent fraud in real time, building trust in digital payments. High-frequency trading firms utilized algorithms executing trades in fractions of a second. Companies like Renaissance Technologies used machine learning for trading strategies, achieving remarkable returns. Algorithmic trading now accounts for a significant portion of trading volume, increasing efficiency but raising concerns about market stability, as seen in the 2010 Flash Crash.

In 2014, Ian Goodfellow and colleagues introduced Generative Adversarial Networks (GANs), which pit two neural networks against each other: a generator that produces synthetic samples and a discriminator that tries to tell them apart from real data. This adversarial dynamic enabled the creation of highly realistic synthetic data, including images and videos. GANs have generated lifelike human faces, created art, and assisted medical imaging by producing synthetic training data that makes diagnostic models more robust.
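
The adversarial training loop fits in a few lines. Below is a toy PyTorch sketch in one dimension, where the “real” data are simply draws from a Gaussian with mean 4; image-generating GANs follow the same pattern at far larger scale, and every number here is illustrative.

```python
import torch
from torch import nn

G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()
ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(2000):
    real = torch.randn(64, 1) * 1.25 + 4.0  # samples from the "real" distribution
    fake = G(torch.rand(64, 1))             # generator maps noise to samples

    # Discriminator learns to label real data 1 and generated data 0.
    d_loss = bce(D(real), ones) + bce(D(fake.detach()), zeros)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator learns to make the discriminator output 1 on its samples.
    g_loss = bce(D(G(torch.rand(64, 1))), ones)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(G(torch.rand(256, 1)).mean().item())  # should drift toward 4.0
```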

In 2017, the Transformer architecture introduced a significant shift in AI methodology, fundamentally changing natural language processing. Developed by Google researchers, Transformers dispensed with the recurrent and convolutional networks that preceded them, relying entirely on attention mechanisms to capture dependencies between any two positions in a sequence. This design allows efficient parallelization during training and the handling of longer contexts.
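
At the Transformer’s core is scaled dot-product attention, in which every position’s similarity to every other is computed in a single matrix product and used to take a weighted average of values. A minimal NumPy sketch with toy random inputs:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each query position attends to every key."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # all pairwise similarities at once
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability for softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1 over the keys
    return weights @ V                              # weighted average of the values

# Toy sequence of 3 token embeddings of dimension 4 (self-attention: Q = K = V = x).
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
print(attention(x, x, x).shape)  # -> (3, 4)
```

Because the whole sequence is processed in one matrix product rather than step by step, this computation parallelizes far better than a recurrent network, which is what made training on massive corpora practical.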

Building on this, OpenAI developed the Generative Pre-trained Transformer (GPT) series. GPT-3, released in 2020, demonstrated unprecedented capabilities in generating human-like text and understanding context. Unlike previous models that required task-specific training, GPT-3 could perform a wide range of language tasks with little or no fine-tuning, showcasing the power of large-scale unsupervised pre-training and few-shot learning. Businesses began integrating GPT models into applications ranging from content creation and code generation to customer service. Today, multiple labs are racing toward “artificial general intelligence” (AGI): systems that can understand, reason, and create at or beyond human level.
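
Few-shot learning means specifying a task purely through examples placed in the prompt, with no retraining. The fragment below mirrors the kind of example popularized by the GPT-3 paper; the completion any particular model returns is not guaranteed.

```python
# Few-shot prompting: the task is defined by worked examples inside the prompt,
# and the model continues the pattern with no gradient updates or fine-tuning.
prompt = """Translate English to French.

sea otter -> loutre de mer
peppermint -> menthe poivrée
cheese ->"""
# Sent to a GPT-style model, this typically completes with "fromage"; swapping
# the examples redefines the task, which is what made GPT-3 so flexible.
print(prompt)
```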

The journey from algorithms to AI over the past 25 years is a testament to seemingly limitless human curiosity, creativity, and the relentless pursuit of progress. We’ve moved from basic algorithms to sophisticated AI systems that understand language, interpret complex data, and exhibit creativity. Exponential growth in computational power, the explosion of available data, and breakthroughs in machine learning have accelerated AI development at a once-unimaginable pace.

Looking ahead, predicting the next 25 years is challenging. As AI advances, it may unlock solutions to challenges we perceive as insurmountable—from curing diseases and solving energy problems to mitigating climate change and exploring deep space. AI’s potential to revolutionize every aspect of our lives is vast. While the exact trajectory is uncertain, the fusion of human ingenuity and AI promises a future rich with possibilities. One wonders when and where the next Google or OpenAI may emerge and what significant good it may bring to the world!



Video Game Actors Are Voting on a New Contract. Here’s What It Means for AI in Gaming

A picketer holds a sign for the SAG-AFTRA video game strike at Warner Bros. Games headquarters on Aug. 1, 2024, in Burbank, Calif. (AP)

An 11-month strike by video game performers could formally end this week if members ratify a deal that delivers pay raises, control over their likenesses and artificial intelligence protections.

The agreement feels "like diamond amounts of pressure suddenly lifted," said Sarah Elmaleh, a voice actor and chair of the Screen Actors Guild-American Federation of Television and Radio Artists' interactive branch negotiating committee.

Union members have until Wednesday at 5 p.m. Pacific to vote on ratifying the tentative agreement.

Voice and body performers for video games raised concerns that unregulated use of AI could displace them and threaten their artistic autonomy.

"It’s obviously far from resolved," Elmaleh said. "But the idea that that we’re in a zone where we might have concluded this feels like a lightening and a relief."

AI concerns are especially dire in the video game industry, where human performers infuse characters with distinctive movements, shrieks, falls and plot-twisting dialogue.

"I hope and I believe that our members, when they look back on this, will say all of the sacrifices and difficulty we put ourselves through to achieve this agreement will ultimately be worth it because we do have the key elements that we need to feel confident and moving forward in this business," said Duncan Crabtree-Ireland, the SAG-AFTRA national executive director and chief negotiator.

Here’s a look at the contract currently up for vote, and what it means for the future of the video game industry.

How did the current strike play out?

Video game performers went on strike last July following nearly two years of failed negotiations with major game studios, as both sides remained split over generative AI regulations.

More than 160 games signed interim agreements accepting the AI provisions SAG-AFTRA was seeking, the union said, which allowed some work to continue.

The video game industry is massive, generating an estimated $187 billion globally in 2024, according to game market forecaster Newzoo.

"OD," and "Physint" were two games delayed due to the strike during the filming and casting stage, video game developer Hideo Kojima wrote in December. Riot Games, a video game developer, announced that same month that some new skins in "League of Legends" would have to use existing voice-overs, since new content couldn't be recorded by striking actors. Skins are cosmetic items that can change the visual appearance of a player and is sometimes equipped with new voice-overs and unique recorded lines.

The proposed contract "builds on three decades of successful partnership between the interactive entertainment industry and the union" to deliver "historic wage increases" and "industry-leading AI provisions," wrote Audrey Cooling, a spokesperson for the video game producers involved in the deal.

"We look forward to continuing to work with performers to create new and engaging entertainment experiences for billions of players throughout the world," Cooling wrote.

Video game performers had previously gone on strike in October 2016, with a tentative deal reached 11 months later. That strike helped secure a bonus compensation structure for voice actors and performance capture artists. The agreement was ratified with 90% support, with 10% of members voting.

The proposed contract secures an increase in performer compensation of just over 15% upon ratification and an additional 3% increase each year of the three-year contract.

How would AI use change in video games?

AI concerns have taken center stage as industries across various sectors attempt to keep up with the fast-evolving technology. It’s a fight that Hollywood writers and actors undertook during the historic film and TV strikes that brought the industry to a halt in 2023.

"In the last few years, it’s become obvious that we are at an inflection point where rules of the road have to be set for AI, and if they aren’t, the consequences are potentially very serious," Crabtree-Ireland said. "I think that really made this negotiation extra important for all of us."

SAG-AFTRA leaders have billed the issues behind the labor dispute — and AI in particular — as an existential crisis for performers. Game voice actors and motion capture artists’ likenesses, they say, could be replicated by AI and used without their consent and without fair compensation.

The proposed contract delineates clear restrictions on when and how video game companies can create digital replicas, which use AI to generate new performances that weren't recorded by an actor.

Employers must obtain written permission from a performer to create a digital replica; consent must be granted during the performer’s lifetime and remains valid after death unless otherwise limited, the contract states. Time spent creating a digital replica will be compensated as the equivalent of the work time a new performance would have required.

The agreement also requires the employer to provide the performer with a usage report that details how the replica was used and calculates the expected compensation.

Elmaleh, who has been voice acting since 2010 and had to turn down projects throughout the strike, said securing these gains required voice actors to bring vulnerability and openness to the bargaining table.

"We talked a lot about the personal, the way it affects our displacement as workers and just the sustainability of our careers," Elmaleh said. "Our work involves your inner child. It’s being very vulnerable, it’s being playful."

What’s next for the video game industry?

The tentative agreement centers on consent, compensation and transparency, which union leaders say are key elements needed for the industry to keep progressing.

As the contract is considered by union members, Elmaleh and Crabtree-Ireland said further work needs to be done to ensure the provisions are as broad as necessary.

"Even though there’s a deal that’s been made now, and we’ve locked in a lot of really crucial protections and guardrails, the things that we haven’t been able to achieve yet, we’re going to be continuing to fight for them," Crabtree-Ireland said. "Every time these contracts expire is our chance to improve upon them."

Elmaleh said she hopes both the video game companies and performers can soon work collaboratively to develop guidelines on AI as the technology evolves, a process she said should begin well before the proposed contract expires in October 2028.

Leading negotiations has felt like a full-time job for Elmaleh, who took on the role in a volunteer capacity. As those efforts wind down, she said she eagerly anticipates returning to video game acting in a landscape that is safer for performers.

Voice acting "is core to who I am. It’s why I fought so hard for this. I wouldn’t do this if I didn’t love what I do so much. I think it’s so special and worthy of protection," she said.