
Data transparency needed to foster support for self-driving cars 

Industry safety disclosures have been inconsistent, noted Cambridge academic Andrew Blake, speaking by video link at a University of Macau symposium
  • The symposium was held to celebrate the 20th anniversary of the school’s partnership with the Cambridge Clare Hall Visiting Fellowship Programme


UPDATED: 03 Apr 2025, 12:07 pm

For global policymakers to feel more assured about artificial intelligence (AI) in self-driving cars, the companies developing the technology should be more open about how they measure safety standards and report their findings, says Andrew Blake, honorary professor of machine intelligence at Clare Hall, University of Cambridge. 

Speaking virtually at the inaugural Clare Hall, University of Cambridge – University of Macau Forum on sustainable development at the Zhuhai campus on Monday morning, Blake noted that the obstacles to closing the AI safety gap for autonomous vehicles, a prerequisite for them to become mainstream, are proving more difficult than industry experts had anticipated a few years ago. 

Besides underlying technological hurdles, industry-wide safety disclosures have been inconsistent, with much of that information not made publicly available, he said. This has become a problem for regulators formulating safety measures, who have in turn drawn criticism for the policies that are eventually implemented. 

[See more: Self-driving cars can now cruise all 300+ kilometres of Hengqin’s roads]

Among the challenges is the reporting of disengagement rates, typically defined as instances in which an autonomous vehicle either cannot determine the appropriate action and hands control to a safety driver, or the safety driver takes control of the vehicle.

Experts have noted that those numbers can be skewed and often misleading, since evaluations can take place on motorways with fewer obstacles, creating inconsistent testing environments. Blake highlighted the wide range of disengagement rates reported for various self-driving vehicles in his presentation, adding that companies hold significant discretion over when to disengage.
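As a rough illustration of why testing environments matter so much (the figures below are hypothetical, not from Blake's presentation or any company's filings), a disengagement rate is conventionally expressed as disengagements per distance driven, so the same fleet can report very different numbers depending on where its kilometres are logged:

```python
# Hypothetical illustration: disengagement rates depend heavily on
# where the kilometres are driven, so headline figures are hard to compare.

def disengagement_rate(disengagements: int, km_driven: float) -> float:
    """Disengagements per 1,000 km driven."""
    return disengagements / km_driven * 1000

# Made-up example: identical mileage, but sparse motorway driving
# produces a far lower rate than dense urban streets.
motorway_rate = disengagement_rate(disengagements=2, km_driven=50_000)
urban_rate = disengagement_rate(disengagements=40, km_driven=50_000)

print(f"motorway: {motorway_rate:.2f} per 1,000 km")
print(f"urban:    {urban_rate:.2f} per 1,000 km")
```

A company that accumulates most of its test mileage on quiet motorways would thus report a much lower headline rate than one testing in city traffic, which is part of Blake's argument for disclosing how the data is collected.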

Knowing how the data is collected would go a long way towards building greater trust in the technology, notably machine learning, Blake argues. To underscore the importance of safety, Blake cited Netflix, which boasts a 90 percent success rate when suggesting entertainment options based on a user's watch history, meaning one mistake for every ten decisions. 

“It is not that impressive of a figure, but no one dies for choosing the wrong movie,” Blake said. 

[See more: No longer confined to science fiction, flying cars will be mass produced from next year]

But when it comes to facial recognition, AI's success rate inches closer to perfection at 99.9 percent. While that may appear nearly flawless, it still represents one mistake for every thousand decisions, which adds up to a litany of errors across the millions of decisions made over time. 

Ideally, the technology needs to achieve an accuracy rate of 99.99999 percent, with seven 9s representing a single mistake for every ten million decisions. As cities urbanise, densely concentrated buildings could interfere with signal accuracy, just as the number of vehicles on the road is expected to increase, not decrease, in the coming years, according to Blake, who added that motor vehicle accidents account for a third of preventable accidents worldwide.
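The arithmetic behind these reliability targets is simple to sketch (a back-of-the-envelope illustration, not taken from Blake's slides): an accuracy rate of n nines corresponds to one mistake in 10^n decisions, which is why the jump from recommendation engines to safety-critical driving is so steep.

```python
# Back-of-the-envelope: convert an accuracy rate, expressed as a
# count of 9s, into the expected number of decisions per mistake.

def decisions_per_mistake(nines: int) -> int:
    """n nines of accuracy (e.g. 3 -> 99.9%) means one error in 10**n decisions."""
    return 10 ** nines

print(decisions_per_mistake(1))  # 90% (Netflix-style): 1 mistake in 10
print(decisions_per_mistake(3))  # 99.9% (facial recognition): 1 in 1,000
print(decisions_per_mistake(7))  # 99.99999% (seven 9s): 1 in 10,000,000
```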

His presentation, entitled "Safe AI driving in Smart Cities?", was well timed, as Hengqin is developing new regulations to stimulate local smart transport and industry growth. Last year, Hengqin opened additional roads for testing self-driving cars, giving vehicles access to more than 300 kilometres of roadways with diverse traffic conditions to improve the technology, a move officials hope will attract firms and research institutions specialising in self-driving vehicles.

Scholarly exchanges

Blake's presentation came during the forum's second panel, which focused on AI. The symposium, themed "Interdisciplinary Approaches to Advancing Sustainable Development: Innovative Solutions to Global Challenges," marked the 20th anniversary of the Cambridge Clare Hall Visiting Fellowship Programme with UM, which has allowed 23 UM scholars to take part in academic collaborations at the University of Cambridge in England. 

The two universities renewed their cooperation agreement, committing to promote joint research projects and enhance talent development. They emphasised the forum’s role in bridging academic dialogue and the importance of integrating diverse perspectives in scientific research.

