Hi, Alexa. How Do I Stop You From Listening In On Me?

Photo: Glenn Harvey

Many of us ask the digital companions in our homes, whether it’s Amazon’s Alexa, Apple’s Siri or Google Assistant, to handle innocuous tasks like setting a timer and playing music.

What most of us may not realize is that in some instances, there might be a person listening in, too.

In two separate reports in The Guardian and Bloomberg News, whistle-blowers recently said they had listened in on Siri recordings and Alexa activations that inadvertently recorded couples having sex and criminals making drug deals. Another publication, VRT, chronicled how a Google subcontractor shared more than 1,000 excerpts from Google recordings, which journalists then used to identify some individuals.

In the tech industry, it’s an open secret that artificial intelligence isn’t all that smart yet. It takes lots of people manually sifting through data to train the computing systems. That means humans occasionally cull through voice recordings to train Alexa, Siri and Google to understand the nuances of speech, such as distinguishing spoken words like “Austin” from “Boston,” or “U2” from “YouTube.”

But tech companies have been opaque in disclosing these practices to us. And they may also have overreached in the types of recordings that they gather.

Google, Apple and Amazon have since publicly said that less than 1 percent of recordings were subject to human review. Apple and Google also said that they suspended their human review programs, while Amazon expanded its Alexa assistant to include a suite of privacy controls.

That got me wondering: What can we do to protect our privacy with these smart assistants, short of chucking them into the recycling bin?

The good news is that there are steps we can take. Amazon and Google offer the ability to disable human vetting for their virtual assistants. Apple has said it plans to release a software update that will let people opt in to its program, which involves humans grading Siri samples for quality control, rather than being part of the program by default.

And there are other things we can do, such as deleting recordings and turning off sensors, to minimize the information shared with the companies.

Here’s a comprehensive guide to protecting your privacy with each of the virtual assistants.

How to curtail Alexa data shared with Amazon
Among smart assistants, Alexa has the most comprehensive and straightforward set of privacy controls. Amazon recently released the Alexa privacy hub, which has a thorough explanation of the types of data collected by the virtual assistant and how to change its privacy settings.

Here’s how to opt out of human vetting:
Open the Alexa app on your smartphone and tap Settings and then select Alexa Privacy.

Tap Manage How Your Data Improves Alexa.

For the control that says Help Improve Amazon Services and Develop New Features, toggle the switch to the off position.

Here’s how to delete your voice recordings:
In the same Alexa Privacy menu, select Review Voice History.

In date range, select the time frame of recordings you want to delete, such as All History.

Tap Delete All Recordings for All History.

Here are other precautions to take with microphones and cameras:
Alexa devices include a physical button to disable their microphones. Hit the kill switch whenever you are having sensitive conversations. The device will illuminate with a red light to indicate that the microphone is off.

Some Alexa devices, like the Echo Spot alarm clock, have a built-in camera. The easiest way to disable it is to say, “Alexa, turn the camera off.” If you still feel uncomfortable with the camera, consider buying a cheap webcam cover that can slide over the lens.

Amazon said in a statement that it takes customer privacy seriously. “We continuously review our practices and procedures to ensure we’re providing customers with the best experiences and privacy choices,” the company said.

How to delete your Apple Siri recordings
Of the three virtual assistants, Siri offers the fewest privacy controls, and its process for managing user data is the least straightforward.

For example, Apple does not offer an option to let people opt in to its so-called grading program, though it has said it plans to do so in a future software update. In addition, there is no ability to review Siri recordings associated with your account, and deleting recordings is cumbersome.

Siri does take some steps to mask your identity. When you make requests with an iPhone, for example, the device associates those with a random identifier instead of your Apple account ID, according to the company. To reset that random identifier, you can turn off Siri and then turn it back on.

Disabling Siri will also delete your data associated with it, including recordings.

Here’s how to disable Siri on an iPhone to erase your data and reset your identifier:
Open the Settings app, tap General, then Keyboard, and toggle Enable Dictation off.

Return to the Settings app. Select Siri & Search. Then disable the switches for Listen for “Hey Siri” and Press Side Button for Siri. You will then see a message asking if you want to disable Siri, which will remove your data from Apple’s servers. Tap Turn Off Siri and your Siri history will be deleted.

To re-enable Siri, go back to each of those settings and turn them back on.

Some of the sensitive recordings uploaded to Siri appeared to have come from unintentional activations, like when the crown of the Apple Watch was pressed down by accident, which summoned the assistant. (In my experience, this can happen when leaning a hand against a couch cushion.)

So here is a precaution to take with an Apple Watch:
To prevent the watch crown from triggering Siri, disable the Siri side button on the iPhone. In the Settings app, tap on Siri & Search, then toggle off Press Side Button for Siri. This will simultaneously disable the shortcut on the watch.

Apple declined to comment beyond an earlier statement announcing the suspension of its Siri-grading program.

How to protect your privacy on Google Home
Google offers some controls for tweaking privacy settings for Google Assistant on Android phones and Google Home smart speakers, among other products.

While Google’s human review program is suspended, you can still make sure you’re not a part of it by opting out. The search company also lets you automatically delete Google Assistant requests made after a period of time.

Here’s how to disable human reviews:
Visit Google’s web tool called Activity controls.

Scroll down to Voice & Audio Activity. Toggle this switch off.

Here’s how to set your recordings to automatically delete:
Again, visit Google’s Activity controls web tool.

Under Web & App Activity, click Manage Activity.

Click Choose to Delete Automatically, then select Keep for 3 Months Then Delete Automatically.

A Google spokesman declined to comment and referred to a blog post, in which the company described its process of working with human language reviewers to improve speech recognition.

(The New York Times)



Amazon Says It Blocked 1,800 North Koreans from Applying for Jobs

Amazon logo (Reuters)

US tech giant Amazon said it has blocked over 1,800 North Koreans from joining the company, as Pyongyang sends large numbers of IT workers overseas to earn and launder funds.

In a post on LinkedIn, Amazon's Chief Security Officer Stephen Schmidt said last week that North Korean workers had been "attempting to secure remote IT jobs with companies worldwide, particularly in the US".

He said the firm had seen a nearly one-third rise in applications from North Koreans over the past year, AFP reported.

The North Koreans typically use "laptop farms": computers in the United States operated remotely from outside the country, he said.

He warned the problem wasn't specific to Amazon and "is likely happening at scale across the industry".

Tell-tale signs of North Korean workers, Schmidt said, included wrongly formatted phone numbers and dodgy academic credentials.
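As a rough illustration, one of the signals Schmidt described, a wrongly formatted phone number, is the kind of check that can be automated. The sketch below is hypothetical (it is not Amazon's screening code, and the `looks_valid_us_phone` helper and its pattern are assumptions for illustration only); real screening systems would combine many such signals.

```python
import re

# Hypothetical illustration of one screening signal: flagging
# applications whose phone number does not resemble a plausible
# US format (optional +1 country code, 3-digit area code,
# 3-digit exchange, 4-digit line number, common separators).
US_PHONE = re.compile(
    r"^\+?1?[\s.-]?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}$"
)

def looks_valid_us_phone(number: str) -> bool:
    """Return True if the string resembles a US phone number."""
    return bool(US_PHONE.match(number.strip()))
```

A number like "(415) 555-0123" passes this check, while a malformed entry such as "12345" would be flagged for review.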

In July, a woman in Arizona was sentenced to more than eight years in prison for running a laptop farm helping North Korean IT workers secure remote jobs at more than 300 US companies.

The scheme generated more than $17 million in revenue for her and North Korea, officials said.

Last year, Seoul's intelligence agency warned that North Korean operatives had used LinkedIn to pose as recruiters and approach South Koreans working at defense firms to obtain information on their technologies.

"North Korea is actively training cyber personnel and infiltrating key locations worldwide," Hong Min, an analyst at the Korea Institute for National Unification, told AFP.

"Given Amazon's business nature, the motive seems largely economic, with a high likelihood that the operation was planned to steal financial assets," he added.

North Korea's cyber-warfare program dates back to at least the mid-1990s.

It has since grown into a 6,000-strong cyber unit known as Bureau 121, which operates from several countries, according to a 2020 US military report.

In November, Washington announced sanctions on eight individuals accused of being "state-sponsored hackers", whose illicit operations were conducted "to fund the regime's nuclear weapons program" by stealing and laundering money.

The US Department of the Treasury has accused North Korea-affiliated cybercriminals of stealing over $3 billion over the past three years, primarily in cryptocurrency.


KAUST Scientists Develop AI-Generated Data to Improve Environmental Disaster Tracking

King Abdullah University of Science and Technology (KAUST) logo

King Abdullah University of Science and Technology (KAUST) and SARsatX, a Saudi company specializing in Earth observation technologies, have developed computer-generated data to train deep learning models to predict oil spills.

According to KAUST, validating the use of synthetic data is crucial for monitoring environmental disasters, as early detection and rapid response can significantly reduce the risks of environmental damage.

Dr. Matthew McCabe, dean of the Biological and Environmental Science and Engineering Division at KAUST, noted that one of the biggest challenges in environmental applications of artificial intelligence is the shortage of high-quality training data.

He explained that this challenge can be addressed by using deep learning to generate synthetic data from a very small sample of real data and then training predictive AI models on it.

This approach can significantly enhance efforts to protect the marine environment by enabling faster and more reliable monitoring of oil spills while reducing the logistical and environmental challenges associated with data collection.


Uber, Lyft to Test Baidu Robotaxis in UK from Next Year 

A sign of Baidu is pictured at the company's headquarters in Beijing, China March 16, 2023. (Reuters)

Uber Technologies and Lyft are teaming up with Chinese tech giant Baidu to try out driverless taxis in the UK next year, marking a major step in the global race to commercialize robotaxis.

It highlights how ride-hailing platforms are accelerating autonomous rollout through partnerships, positioning London as an early proving ground for large-scale robotaxi services in Europe.

Lyft, meanwhile, plans to deploy Baidu's autonomous vehicles in Germany and the UK under its platform, pending regulatory approval. Both companies have abandoned in-house development of autonomous vehicles and now rely on alliances to accelerate adoption.

The partnerships underscore how global robotaxi rollouts are gaining momentum. Alphabet's Waymo said in October it would start tests in London this month, while Baidu and WeRide have launched operations in the Middle East and Switzerland.

Robotaxis promise safer, greener and more cost-efficient rides, but profitability remains uncertain. Public companies like Pony.ai and WeRide are still loss-making, and analysts warn the economics of expensive fleets could pressure margins for platforms such as Uber and Lyft.

Analysts have said hybrid networks, mixing robotaxis with human drivers, may be the most viable model to manage demand peaks and pricing.

Lyft completed its $200 million acquisition of European taxi app FreeNow from BMW and Mercedes-Benz in July, marking its first major expansion beyond North America and giving the US ride-hailing firm access to nine countries across Europe.