Use of AI in Fighting Crime Stirs Privacy Concerns

Privacy and civil liberties activists voiced concern over the use of AI in fighting crime. (Reuters)

Police in the US state of Delaware are poised to deploy "smart" cameras in cruisers to help authorities detect a vehicle carrying a fugitive, a missing child or a senior who has wandered off.

According to David Hinojosa of Coban Technologies, the company providing the equipment, the video feeds will be analyzed using artificial intelligence (AI) to identify vehicles by license plate or other features and "give an extra set of eyes" to officers on patrol.

"We are helping officers keep their focus on their jobs," said Hinojosa, who touts the new technology as a "dash cam on steroids," the German news agency dpa reported.

The program is part of a growing trend to use vision-based AI to thwart crime and improve public safety, a trend which has stirred concerns among privacy and civil liberties activists who fear the technology could lead to secret "profiling" and abuse of data.

US-based startup Deep Science is using the same technology to help retail stores detect in real time if an armed robbery is in progress, by identifying guns or masked assailants.

Deep Science has pilot projects with US retailers, enabling automatic alerts in the case of robberies, fire or other threats. The technology can monitor for threats more efficiently and at a lower cost than human security guards, according to Deep Science co-founder Sean Huver, a former engineer for DARPA, the Pentagon's long-term research arm.

Until recently, most predictive analytics relied on inputting numbers and other data to interpret trends. But advances in visual recognition are now being used to detect firearms, specific vehicles or individuals to help law enforcement and private security.
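The alerting logic such systems imply can be sketched in a few lines; the detection model itself is the hard part and is mocked here. The `Detection` type, the labels and the threshold below are all illustrative assumptions, not taken from any vendor's product:

```python
from dataclasses import dataclass

# Hypothetical detection result; a real system would get these from a
# vision model running on the camera feed.
@dataclass
class Detection:
    label: str          # e.g. "firearm", "vehicle", "masked_person"
    confidence: float   # model score in [0, 1]

THREAT_LABELS = {"firearm", "masked_person"}
ALERT_THRESHOLD = 0.85  # illustrative cutoff, not a vendor setting

def alerts_for_frame(detections):
    """Return the threat detections in one video frame that warrant an alert."""
    return [d for d in detections
            if d.label in THREAT_LABELS and d.confidence >= ALERT_THRESHOLD]

frame = [Detection("vehicle", 0.97),
         Detection("firearm", 0.91),
         Detection("firearm", 0.40)]   # low-confidence hit is suppressed
print([d.label for d in alerts_for_frame(frame)])  # ['firearm']
```

The filtering step is trivial by design: the value these companies sell lies in the detector that produces the labels, not in the alert dispatch around it.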

Saurabh Jain is a product manager at the graphics chipmaker Nvidia, which makes computer chips for such systems and which recently held a conference in Washington with its technology partners.

He said the same computer vision technologies are used for self-driving vehicles, drones and other autonomous systems, to recognize and interpret the surrounding environment.

Nvidia has some 50 partners who use its supercomputing module called Jetson or its Metropolis software for security and related applications, according to Jain.

One of those partners, California-based Umbo Computer Vision, has developed an AI-enhanced security monitoring system which can be used at schools, hotels or other locations, analyzing video to detect intrusions and threats in real-time, and sending alerts to a security guard's computer or phone.

Russia-based startup Vision Labs employs the Nvidia technology for facial recognition systems that can be used to identify potential shoplifters or problem customers in casinos or other locations.

Vadim Kilimnichenko, project manager at Vision Labs, said the company works with law enforcement in Russia as well as commercial clients.

"We can deploy this anywhere through the cloud," he said.

Customers of Vision Labs include banks seeking to prevent fraud, which can use face recognition to determine whether someone is using a false identity, Kilimnichenko said.

For Marc Rotenberg, president of the Electronic Privacy Information Center, the rapid growth in these technologies raises privacy risks and calls for regulatory scrutiny over how data is stored and applied.

"Some of these techniques can be helpful but there are huge privacy issues when systems are designed to capture identity and make a determination based on personal data," Rotenberg said. "That's where issues of secret profiling, bias and accuracy enter the picture."

Rotenberg said the use of AI systems in criminal justice calls for scrutiny to ensure legal safeguards, transparency and procedural rights.

In a blog post earlier this year, Shelly Kramer of Futurum Research argued that AI holds great promise for law enforcement, be it for surveillance, scanning social media for threats, or using "bots" as lie detectors.

"With that encouraging promise, though, comes a host of risks and responsibilities," she added.



KAUST Scientists Develop AI-Generated Data to Improve Environmental Disaster Tracking

King Abdullah University of Science and Technology (KAUST) logo

King Abdullah University of Science and Technology (KAUST) and SARsatX, a Saudi company specializing in Earth observation technologies, have developed computer-generated data to train deep learning models to predict oil spills.

According to KAUST, validating the use of synthetic data is crucial for monitoring environmental disasters, as early detection and rapid response can significantly reduce the risks of environmental damage.

Dr. Matthew McCabe, Dean of the Biological and Environmental Science and Engineering Division at KAUST, noted that one of the biggest challenges in environmental applications of artificial intelligence is the shortage of high-quality training data.

He explained that this challenge can be addressed by using deep learning to generate synthetic data from a very small sample of real data and then training predictive AI models on it.

This approach can significantly enhance efforts to protect the marine environment by enabling faster and more reliable monitoring of oil spills while reducing the logistical and environmental challenges associated with data collection.
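The pipeline McCabe describes — expand a tiny real sample into a large synthetic training set, then fit a predictive model on the result — can be illustrated with a toy sketch. This is not KAUST's actual method: real systems work on SAR imagery and use deep generative models, whereas the features, values and jitter-based "generator" below are invented stand-ins:

```python
import random

random.seed(0)

# Toy stand-in for a "very small sample of real data": two numeric features
# per observation (imagine summary statistics of a satellite image patch)
# and a label: 1 = oil slick, 0 = clean water. All values are invented.
real = [((0.20, 0.30), 1), ((0.25, 0.35), 1),
        ((0.80, 0.70), 0), ((0.85, 0.75), 0)]

def synthesize(samples, n, noise=0.05):
    """Create n synthetic samples by jittering real ones — a crude stand-in
    for a deep generative model trained on the small real set."""
    out = []
    for _ in range(n):
        (x, y), label = random.choice(samples)
        out.append(((x + random.gauss(0, noise), y + random.gauss(0, noise)), label))
    return out

train = real + synthesize(real, 200)  # 4 real points become 204 training points

# Train a trivial predictor (nearest class centroid) on the augmented set.
def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

c1 = centroid([p for p, lab in train if lab == 1])
c0 = centroid([p for p, lab in train if lab == 0])

def predict(p):
    d1 = (p[0] - c1[0]) ** 2 + (p[1] - c1[1]) ** 2
    d0 = (p[0] - c0[0]) ** 2 + (p[1] - c0[1]) ** 2
    return 1 if d1 < d0 else 0

print(predict((0.22, 0.33)))  # 1 -> classified as oil slick
```

The point of the sketch is the data flow, not the model: the scarce, expensive step (collecting labeled observations of real spills) is amortized by generating plausible variations of it before training.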


Uber, Lyft to Test Baidu Robotaxis in UK from Next Year 

A sign of Baidu is pictured at the company's headquarters in Beijing, China March 16, 2023. (Reuters)

Uber Technologies and Lyft are teaming up with Chinese tech giant Baidu to try out driverless taxis in the UK next year, marking a major step in the global race to commercialize robotaxis.

The move highlights how ride-hailing platforms are accelerating autonomous rollout through partnerships, positioning London as an early proving ground for large-scale robotaxi services in Europe.

Lyft, meanwhile, plans to deploy Baidu's autonomous vehicles in Germany and the UK under its platform, pending regulatory approval. Both companies have abandoned in-house development of autonomous vehicles and now rely on alliances to accelerate adoption.

The partnerships underscore how global robotaxi rollouts are gaining momentum. Alphabet's Waymo said in October it would start tests in London this month, while Baidu and WeRide have launched operations in the Middle East and Switzerland.

Robotaxis promise safer, greener and more cost-efficient rides, but profitability remains uncertain. Public companies like Pony.ai and WeRide are still loss-making, and analysts warn the economics of expensive fleets could pressure margins for platforms such as Uber and Lyft.

Analysts have said hybrid networks, mixing robotaxis with human drivers, may be the most viable model to manage demand peaks and pricing.

Lyft completed its $200 million acquisition of European taxi app FreeNow from BMW and Mercedes-Benz in July, marking its first major expansion beyond North America and giving the US ride-hailing firm access to nine countries across Europe.


Italy Fines Apple Nearly 100m Euros over App Privacy Feature

An Apple logo hangs above the entrance to the Apple store on 5th Avenue in the Manhattan borough of New York City, July 21, 2015. (Reuters/Mike Segar/File Photo)

Italy's competition authority said Monday it had fined US tech giant Apple 98 million euros ($115 million) for allegedly abusing its dominant position in the mobile app market.

According to AFP, the authority, known as the AGCM, said in a statement that Apple had violated privacy regulations for third-party developers in a market where it "holds a super-dominant position through its App Store".

The body said its investigation had established the "restrictive nature" of the "privacy rules imposed by Apple... on third-party developers of apps distributed through the App Store".

The rules of Apple's App Tracking Transparency (ATT) "are imposed unilaterally and harm the interests of Apple's commercial partners", according to the AGCM statement.

French antitrust authorities earlier this year handed Apple a 150-million-euro fine over its app tracking privacy feature.

Authorities elsewhere in Europe have also opened similar probes over ATT, which Apple promotes as a privacy safeguard.

The feature, introduced by Apple in 2021, requires apps to obtain user consent through a pop-up window before tracking their activity across other apps and websites.

If the user declines, the app loses access to the data on that user that enables ad targeting.
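The consent gate described above can be modeled in a short sketch. This is an illustrative Python analogue, not Apple's API (on iOS the real mechanism is the ATTrackingManager class in Swift, and its names differ from those below); the all-zeros fallback identifier, however, does mirror what a declined request yields on iOS:

```python
from enum import Enum
import uuid

# Illustrative consent states, loosely mirroring the statuses an ATT
# prompt can produce; the enum and function names here are invented.
class Consent(Enum):
    NOT_DETERMINED = "not_determined"
    DENIED = "denied"
    AUTHORIZED = "authorized"

def advertising_id(consent, real_idfa):
    """Return the cross-app advertising identifier only if the user opted in;
    otherwise return an all-zero ID, so ads cannot be linked to the user."""
    if consent is Consent.AUTHORIZED:
        return real_idfa
    return uuid.UUID(int=0)  # 00000000-0000-0000-0000-000000000000

idfa = uuid.uuid4()
print(advertising_id(Consent.DENIED, idfa))              # zeroed-out identifier
print(advertising_id(Consent.AUTHORIZED, idfa) == idfa)  # True
```

The design point at issue in the antitrust cases is exactly this gate: whoever controls where it sits in the platform decides which parties lose access to the identifier when consent is withheld.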

Critics have accused Apple of using the system to promote its own advertising services while restricting competitors.