The browser wars are back — only this time, the battleground isn’t tabs or load times. It’s intelligence.

A new wave of AI-powered browsers promises to transform how we interact with the web, turning passive pages into active assistants that summarize, search, automate, and act on your behalf. But while the tech may feel novel, the business model behind it isn’t. These browsers don’t just offer smarter tools — they risk ushering in a new era of data extraction, baked into the very architecture of how we browse.

On 9 July, Perplexity launched Comet — a slick new browser that promises to revolutionize how we interact with the web using embedded AI assistants. Soon, OpenAI is expected to follow, reportedly designing its browser to take on Google Chrome by baking agentic AI models directly into the browsing experience. These models won’t just help you navigate the web — they’ll act on your behalf, making decisions, summarizing content, and even initiating tasks.

For all the talk of innovation, though, there’s an uncomfortable sense of déjà vu. Because while the front end may be changing, the business model behind it all feels eerily familiar — surveillance, packaged as convenience.

Privacy failures

We’ve been down this road before. For anyone who remembers Cambridge Analytica or Google’s $5 billion Incognito tracking lawsuit, the idea that tech companies might exploit user data in the name of progress shouldn’t come as a surprise.

What is surprising, though, is how quickly AI companies are embracing the very same privacy failures that landed their Web 2.0 predecessors in hot water. Comet, for instance, reportedly tracks “everything users do online” to build hyper-personalized ad profiles — a move straight out of the early-2000s Google playbook.

But this isn’t just a repeat of the past. The stakes are much higher now. AI systems don’t simply store information — they learn from it. They don’t just record your browsing history — they analyze it, infer your intent, predict your preferences, and adapt to your behavior. This isn’t passive tracking. It’s predictive, persuasive, and increasingly invisible.

Invisibility

And that invisibility is part of the problem. When a browser starts finishing your sentences, anticipating your questions, and helping with your emails, it feels like magic. But behind that seamless experience is a complex black box trained on your digital life. And unlike cookies or ad IDs, this kind of data isn’t easily wiped.

Once an AI model ingests your personal information, there’s no reliable way to make it forget. What goes in becomes part of the model’s DNA — shaping its outputs long after you’ve closed the tab.

Some argue that users understand this trade-off — that people are willingly giving up privacy for smarter tools. But let’s not pretend “I agree” on a 12,000-word terms of service means informed consent.

Most users don’t know what they’re giving away, let alone how it might be used months or years down the line. We’ve normalized this kind of ambient data collection to the point that it barely registers as a privacy issue anymore. That doesn’t make it harmless. It just makes it harder to spot.

Building user trust

As the founder of Aloha Browser, I’ve spent years watching the industry flirt with these trade-offs. I understand the temptation to lean into data-driven personalization. But I also know that building user trust requires restraint, not reach. Respecting people’s boundaries shouldn’t be considered radical — it should be the baseline.

The urgency of this moment isn’t just technical — it’s also regulatory. Earlier this month, the European Commission released a voluntary Code of Practice for general-purpose AI models, marking the first major milestone in the rollout of the EU’s AI Act.

Full compliance will become mandatory by August 2026, but these early guidelines already signal the direction of travel — transparency, documentation, and accountability. Europe now has the chance to lead by example — to show that it’s possible to build transformative AI products without reverting to the surveillance capitalism model that defined the last digital era.

Invisible surveillance

But regulation moves slowly, and the industry doesn’t wait. The AI browsers launching now will set precedents — technical, legal, and cultural — that could shape the next decade of digital life.

If we let these tools define “normal” before the rules catch up, we risk being locked into an architecture of invisible surveillance far more entrenched, and far less visible, than anything we faced in the Web 2.0 era of Cambridge Analytica.

We don’t have to accept that outcome. If we don’t fight for privacy now, we’ll lose it — not with a bang, but with an instant, frictionless click.


This article was produced as part of TechRadarPro’s Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro