Mark Zuckerberg, AI's 'Open Source' Evangelist

FILE PHOTO: Meta's CEO Mark Zuckerberg testifies during the Senate Judiciary Committee hearing on online child sexual exploitation at the US Capitol, in Washington, US, January 31, 2024. REUTERS/Nathan Howard/File Photo

Mark Zuckerberg, the founder of Facebook and CEO of Meta, has become an unexpected evangelist for open source technology when it comes to developing artificial intelligence, a stance that pits him against OpenAI and Google.
The 40-year-old tech tycoon laid out his vision in an open letter titled "Open Source AI is the Path Forward" this week. Here is what you need to know about the debate between open and closed AI models, according to Agence France-Presse.
What is 'open source'?
The history of computer technology has long pitted open source aficionados against companies clinging to their intellectual property.
"Open source" refers to software development where the program code is made freely available to the public, allowing developers to tinker and build on it as they wish.
Many of the internet's foundational technologies, such as the Linux operating system and the Apache web server, are products of open source development.
However, open source is not without challenges. Maintaining large projects, ensuring consistent quality, and managing a wide range of contributors can be complex.
And, almost by definition, keeping open source projects financially sustainable is difficult.
Why is Meta AI 'open source'?
Zuckerberg is probably the last person you would expect to embrace open source.
The company maintains total control over its Instagram and Facebook platforms, leaving little to no leeway for outside developers or researchers to tinker.
The Cambridge Analytica scandal, in which an outside vendor was revealed in 2018 to have used the platform to gather user information for nefarious purposes, only made the company more protective.
Meta's sudden embrace of the open source ethos is driven by its bitterness towards Apple, whose iPhone rules keep tight control over what Meta and all other outside apps can do on its devices.
"One of my formative experiences has been building our services constrained by what Apple will let us build on their platforms," Zuckerberg said.
"Between the way they tax developers, the arbitrary rules they apply, and all the product innovations they block from shipping, it's clear that Meta and many other companies would be freed up if...competitors were not able to constrain what we could build," he wrote.
That concern has now spread to generative AI, but this time it is Microsoft-backed OpenAI and Google that are the closed-fence culprits, charging developers and keeping a tight lid on their AI technology.
Doubters argue that Meta is embracing open source because it came late to the AI party, and is seeking to blow open the field with free access to a powerful model.
What is Llama?
Meta's open source Llama 3.1 (for Large Language Model Meta AI) is the latest version of the company's generative AI technology, capable of churning out human-sounding content in seconds.
Performance-wise, it can be compared to OpenAI’s GPT-4 or Google’s Gemini, and like those models is "trained" before deployment by ingesting data from the internet.
But unlike those models, developers can access the technology for free, and make adaptations as they see fit for their specific use cases.
Meta says Llama 3.1 is as good as the best models out there, but unlike its main rivals it only handles text; the company says it will later match the others with images, audio and video.
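For developers, that free access typically means downloading the model weights and running them on their own hardware or cloud setup. Below is a minimal sketch of what that can look like in Python, using the Hugging Face transformers library; the package choices and the Meta-Llama-3.1-8B-Instruct checkpoint (a gated repository that requires accepting Meta's license) are illustrative assumptions, not details from Meta's announcement.

# Minimal sketch: loading Llama 3.1 open weights via the Hugging Face
# transformers library. Assumes `transformers`, `torch` and `accelerate`
# are installed, and that access to the gated meta-llama repository on
# the Hugging Face Hub has been granted (assumptions, not from the article).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # one of several released sizes

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Because the weights live locally, developers are free to fine-tune,
# quantize or otherwise adapt the model before or after generation.
prompt = "Explain the open versus closed AI model debate in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

In practice, this local control is what allows the kind of adaptation "as they see fit" described above, without routing data through the model-maker's servers.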
Security threat
In the rivalry over generative AI, defenders of the closed model argue that the Meta way is dangerous, as it allows bad actors to weaponize the powerful technology.
In Washington, lobbyists argue over the distinction, with opponents of open source insisting that models like Llama can be weaponized by countries like China.
Meta argues that transparency ensures a more level playing field, and that a world of closed models would leave only a few big companies, along with a powerhouse nation like China, in control.
Startups, universities, and small businesses will "miss out on opportunities," Zuckerberg said.



Justice at Stake as Generative AI Enters the Courtroom

Generative artificial intelligence has been used in the US legal system by judges performing research, lawyers filing appeals and parties involved in cases who wanted help expressing themselves in court. Jefferson Siegel / POOL/AFP

Generative artificial intelligence (GenAI) is making its way into courts despite early stumbles, raising questions about how it will influence the legal system and justice itself.

Judges use the technology for research, lawyers use it for appeals, and parties involved in cases rely on GenAI to help express themselves in court.

"It's probably used more than people expect," said Daniel Linna, a professor at the Northwestern Pritzker School of Law, about GenAI in the US legal system.

"Judges don't necessarily raise their hand and talk about this to a whole room of judges, but I have people who come to me afterward and say they are experimenting with it”.

In one prominent instance, GenAI enabled murder victim Chris Pelkey to address an Arizona courtroom -- in the form of a video avatar -- at the sentencing of the man convicted of shooting him dead in 2021 during a clash between motorists.

"I believe in forgiveness," said a digital proxy of Pelkey created by his sister, Stacey Wales.

The judge voiced appreciation for the avatar, saying it seemed authentic.

"I knew it would be powerful," Wales told , "that that it would humanize Chris in the eyes of the judge."

The AI testimony, a first of its kind, ended the sentencing hearing at which Wales and other members of the slain man's family spoke about the impact of the loss.

Since the hearing, examples of GenAI being used in US legal cases have multiplied.

"It is a helpful tool and it is time-saving, as long as the accuracy is confirmed," said attorney Stephen Schwartz, who practices in the northeastern state of Maine.

"Overall, it's a positive development in jurisprudence."

Schwartz described using ChatGPT as well as GenAI legal assistants, such as LexisNexis Protege and CoCounsel from Thomson Reuters, for researching case law and other tasks.

"You can't completely rely on it," Schwartz cautioned, recommending that cases proffered by GenAI be read to ensure accuracy.

"We are all aware of a horror story where AI comes up with mixed-up case things."

The technology has been the culprit behind false legal citations, far-fetched case precedents, and flat-out fabrications.

In early May, a federal judge in Los Angeles imposed $31,100 in fines and damages on two law firms for an error-riddled petition drafted with the help of GenAI, blasting it as a "collective debacle."

The tech is also relied on by some who skip lawyers and represent themselves in court, often leading to legal errors.

And as GenAI makes it easier and cheaper to draft legal complaints, courts already overburdened by caseloads could see them climb higher, said Shay Cleary of the National Center for State Courts.

"Courts need to be prepared to handle that," Cleary said.

Transformation

Law professor Linna, though, sees the potential for GenAI to be part of the solution, giving more people the ability to seek justice in courts made more efficient.

"We have a huge number of people who don't have access to legal services," Linna said.

"These tools can be transformative; of course we need to be thoughtful about how we integrate them."

Federal judges in the US capital have written decisions noting their use of ChatGPT in laying out their opinions.

"Judges need to be technologically up-to-date and trained in AI," Linna said.

GenAI assistants already have the potential to influence the outcome of cases the same way a human law clerk might, reasoned the professor.

Facts or case law pointed out by GenAI might sway a judge's decision, and could differ from what a human law clerk would have come up with.

But if GenAI lives up to its potential and excels at finding the best information for judges to consider, that could make for well-grounded rulings less likely to be overturned on appeal, according to Linna.