Drivers More Likely to Be Distracted While Using Partial Automation Tech, Study Shows 

Cars are stuck in traffic after police blocked the road in West Palm Beach, Florida, on September 15, 2024 following a shooting incident at former US president Donald Trump's golf course. (AFP)

Drivers are more likely to engage in non-driving activities, such as checking their phones or eating a sandwich, when using partial automation systems, with some easily skirting the safeguards meant to limit distraction, new research showed on Tuesday.

The Insurance Institute for Highway Safety (IIHS) conducted month-long studies of two such systems - Tesla's Autopilot and Volvo's Pilot Assist - to examine driver behavior while the technology was in use and how that behavior evolved over time.

While launching and commercializing driverless taxis has proved tougher than expected, major automakers are racing to deploy technology that partially automates routine driving tasks, both to make driving easier and safer and to generate revenue for the companies.

The rush has sparked concerns and litigation around the dangers of driver distraction and crashes involving such technology.

The studies show that better safeguards are needed to ensure attentive driving, the IIHS said in its report.

Partial automation - one tier of "advanced driver assistance systems" - uses cameras, sensors and software to regulate a car's speed relative to surrounding traffic and to keep it centered in its lane. Some systems can also change lanes, either automatically or when prompted.

Drivers, however, are required to monitor the road continuously and be ready to take over at any time, and most systems require them to keep their hands on the wheel.
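As a rough illustration of that division of labor, here is a minimal Python sketch of the kind of control loop such a system might run each sensor tick. Every name, gain and threshold below is an invented assumption for illustration, not any automaker's actual logic.

```python
# Toy sketch of a partial-automation control loop. All names, gains and
# thresholds are illustrative assumptions, not any automaker's real code.
from dataclasses import dataclass


@dataclass
class SensorFrame:
    gap_to_lead_car_m: float  # distance to the vehicle ahead, meters
    lane_offset_m: float      # displacement from lane center, meters
    hands_on_wheel: bool      # steering torque sensor detects the driver


def control_step(frame: SensorFrame, hands_off_s: float, dt: float = 0.1):
    """One tick: regulate speed, center the car, and police attention."""
    # Adaptive cruise: accelerate toward a target following gap,
    # braking (negative accel) when the car ahead gets too close.
    target_gap_m = 40.0
    accel = 0.1 * (frame.gap_to_lead_car_m - target_gap_m)

    # Lane centering: steer proportionally back toward the lane center.
    steer = -0.5 * frame.lane_offset_m

    # Attention safeguard: any wheel input resets the timer -- the very
    # loophole the IIHS study says drivers learn to exploit.
    hands_off_s = 0.0 if frame.hands_on_wheel else hands_off_s + dt
    if hands_off_s > 15.0:
        mode = "disengage"  # hand control back to the driver
    elif hands_off_s > 5.0:
        mode = "warn"       # chime or flash until hands return
    else:
        mode = "assist"
    return accel, steer, mode, hands_off_s
```

Because a design like this resets the timer on any steering input, a brief nudge every few seconds satisfies the system without restoring real attention - the adaptation both studies observed.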

"These results are a good reminder of the way people learn," said IIHS President David Harkey. "If you train them to think that paying attention means nudging the steering wheel every few seconds, then that's exactly what they'll do."

"In both these studies, drivers adapted their behavior to engage in distracting activities," Harkey said. "This demonstrates why partial automation systems need more robust safeguards to prevent misuse."

The study with Tesla's Autopilot involved 14 drivers who covered more than 12,000 miles (19,300 km) with the system, triggering 3,858 attention-related warnings. On average, drivers responded in about three seconds, usually by nudging the steering wheel, which was generally enough to prevent an escalation.

The study with Volvo's Pilot Assist involved 29 volunteers, who were found to be distracted 30% of the time while using the system - a rate the authors called "exceedingly high."



AI Experts Ready ‘Humanity’s Last Exam’ to Stump Powerful Tech

Figurines with computers and smartphones are seen in front of the words "Artificial Intelligence AI" in this illustration taken, February 19, 2024. (Reuters)

A team of technology experts issued a global call on Monday seeking the toughest questions to pose to artificial intelligence systems, which increasingly make child's play of popular benchmark tests.

Dubbed "Humanity's Last Exam," the project seeks to determine when expert-level AI has arrived. It aims to stay relevant even as capabilities advance in future years, according to the organizers, a non-profit called the Center for AI Safety (CAIS) and the startup Scale AI.

The call comes days after the maker of ChatGPT previewed a new model, known as OpenAI o1, which "destroyed the most popular reasoning benchmarks," said Dan Hendrycks, executive director of CAIS and an advisor to Elon Musk's xAI startup.

Hendrycks co-authored two 2021 papers proposing tests of AI systems that are now widely used: one quizzing them on undergraduate-level knowledge of topics like US history, the other probing models' ability to reason through competition-level math. The undergraduate-style test has been downloaded from the online AI hub Hugging Face more than any other such dataset.

At the time of those papers, AI was giving almost random answers to questions on the exams. "They're now crushed," Hendrycks told Reuters.

As one example, the Claude models from the AI lab Anthropic went from scoring about 77% on the undergraduate-level test in 2023 to nearly 89% a year later, according to a prominent capabilities leaderboard.

These common benchmarks have less meaning as a result.

AI has appeared to score poorly on lesser-used tests involving plan formulation and visual pattern-recognition puzzles, according to Stanford University’s AI Index Report from April. OpenAI o1 scored around 21% on one version of the pattern-recognition ARC-AGI test, for instance, the ARC organizers said on Friday.

Some AI researchers argue that results like these show planning and abstract reasoning to be better measures of intelligence, though Hendrycks said the visual component of ARC makes it less suited to assessing language models. "Humanity's Last Exam" will require abstract reasoning, he said.

Answers from common benchmarks may also have ended up in data used to train AI systems, industry observers have said. Hendrycks said some questions on "Humanity's Last Exam" will remain private to make sure AI systems' answers are not from memorization.

The exam will include at least 1,000 crowd-sourced questions, due November 1, that are hard for non-experts to answer. Submissions will undergo peer review, with winning questions earning their authors co-authorship and prizes of up to $5,000, sponsored by Scale AI.

"We desperately need harder tests for expert-level models to measure the rapid progress of AI," said Alexandr Wang, Scale's CEO.

One restriction: the organizers want no questions about weapons, which some say would be too dangerous for AI to study.