Skip to content
Menu
Menu

New AI-powered browsers create ‘cybersecurity time bomb,’ experts say

ChatGPT Atlas and Microsoft’s Copilot Mode offer hands-off convenience, letting AI agents handle repetitive tasks on your behalf
  • But cybersecurity experts warn these browsers are riddled with vulnerabilities that expose users to data theft, malware and prompt injection attacks

ARTICLE BY

PUBLISHED

ARTICLE BY

PUBLISHED

While a wave of AI-powered browsers promise users an optimised internet experience, researchers warn of a cybersecurity time bomb, reports The Verge.

Late October saw two heavy hitters enter the AI browser space, with the launch of OpenAI’s ChatGPT Atlas and Microsoft’s Copilot Mode for Edge. By putting AI at the centre of the browser itself, companies aim to deliver an intelligent assistant that can answer questions, summarise pages and even perform actions on your behalf. That ability allows users to delegate repetitive tasks, freeing up time for other work, but it comes at a cost. Cybersecurity experts told The Verge that this hands-off convenience opens up users to a minefield of new vulnerabilities and data leaks – some of which are already being exploited.

Within the last few weeks, researchers have revealed serious vulnerabilities with the newly released Atlas that allow attackers to exploit the AI’s “memory” to inject malicious code, gaining access privileges or deploying malware – and that’s just the tip of the iceberg.

[See more: Identifying the AI bubble: Are we there yet?]

The memory function also poses a more insidious risk, allowing AI browsers to collect far more information on users than traditional browsers as it learns from everything you do or share, as well as conversations with the built-in AI assistant. That level of information, coupled with stored credit card details and login credentials, make AI browsers a tempting target for hackers.

Agentic AI browsers are even more vulnerable to data leakage, discarding the strict separation that’s long protected users. A single prompt injection – malicious instructions embedded in content the AI reads – could give attackers access to login credentials, personal information, emails, calendars and more. No malware needed – the AI steals it for them.

Even the makers of agentic AI browsers acknowledge that prompt injections are a major threat. Both OpenAI CISO Dane Stucky and Perplexity, which rolled out its free AI-powered web browser Comet last month, describe them as a “frontier” problem that has no firm solution.

Rolling out new technology always carries risk, but one expert told The Verge that in their rush to market, companies failed to test AI browsers as thoroughly as they should. “Browser vendors have a lot of work to do in order to make them more safe, secure, and private for the end users,” said Yash Vekaria, a computer science researcher at UC Davis.

Send this to a friend