Browser security in the age of AI agents

Pierre Tachoire, CTO

Katie Hallett, COO

Last week, Brave exposed security risks in Comet, Perplexity’s AI browser.

TLDR: giving an LLM direct access to a user-facing browser is dangerous. The model can see everything you see and take actions on your behalf. It can be tricked by malicious content, leak information and take unexpected actions.

This is not a thought experiment, it’s happening.

Comet’s approach allows the LLM to access data across tabs, creating a high-risk scenario. Defenses like in-model guardrails can reduce risk but cannot guarantee safety, because a single malicious prompt could bypass them.

As observers noted in the Hacker News thread, this is not like a human making a mistake. LLMs can be attacked relentlessly, and by design they can treat any input as instructions to execute, which makes their behavior unpredictable. Limiting permissions or approved actions helps, but it cannot fully eliminate the possibility of a catastrophic breach, making this approach inherently unsafe.

The lethal trifecta

Simon Willison’s lethal trifecta explains that the moment an LLM system combines 1) exposure to untrusted content, 2) access to private data, and 3) the ability to communicate externally, an attacker can trick it into leaking that data.

There is a structural problem when you apply this in the context of browser agents.

  1. Exposure to untrusted content: give a model access to the web and you are exposed
  2. Access to private data: let your agent interact with your sessions and you are exposed
  3. The ability to communicate externally: let your agent take actions on the web and you are exposed

The reality is that the more capable your AI agent is, the more dangerous it becomes if it touches the web directly.

Isolating the browser is not enough

All current approaches, whether user-facing or headless, are forks of the Chromium project, a browser designed for human users.

You can give an agent a dedicated browser instance in a sandbox, which reduces risk, but it does not remove it. Using Chromium forces you to implement restrictions around the browser: limiting what it can do, controlling what it can access, and monitoring its actions. Every feature, extension, and hidden API in the Chromium stack increases the attack surface, making it fundamentally harder to provide safety guarantees.
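To make “restrictions around the browser” concrete, here is a minimal sketch (assuming Puppeteer and an ESM setup with top-level await; the allowlist and URLs are hypothetical) that intercepts every network request a Chromium instance makes and drops anything outside an approved domain:

```ts
// A minimal sketch of restrictions wrapped *around* a Chromium instance.
// The allowlist and URLs are hypothetical.
import puppeteer from "puppeteer";

const ALLOWED_HOSTS = new Set(["docs.example.com"]);

const browser = await puppeteer.launch({ headless: true });
const page = await browser.newPage();

// Intercept every request the page makes and block anything not on the allowlist.
await page.setRequestInterception(true);
page.on("request", (request) => {
  const host = new URL(request.url()).hostname;
  if (ALLOWED_HOSTS.has(host)) {
    request.continue();
  } else {
    request.abort();
  }
});

await page.goto("https://docs.example.com/guide");
// ...hand the rendered content to the agent, log its actions, etc.
await browser.close();
```

Every restriction here lives outside the browser: the wrapper has to anticipate every channel Chromium exposes, which is exactly the surface-area problem described above.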

By contrast, building a browser from scratch allows you to enforce these constraints inside the browser itself. You can design it to only do what you authorize, limit exposure to untrusted content, and tightly control access to private data.

How do you give an LLM a browser without giving it everything?

Cut any leg of the lethal trifecta and you remove the risk.

The answer Simon Willison proposes is to split the work across multiple LLMs: a quarantined model handles untrusted content, a privileged model orchestrates without ever seeing that content, and in the middle sits a controller, plain code with no LLM reasoning, that passes data between them safely.
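As a rough sketch of that split (not Willison’s reference implementation; the two model wrappers and the task format are hypothetical), the shape looks something like this:

```ts
// Rough sketch of the dual-LLM / controller pattern.
// callQuarantinedLLM and callPrivilegedLLM are hypothetical stand-ins
// for two separately sandboxed models.
declare function callQuarantinedLLM(prompt: string): Promise<string>;
declare function callPrivilegedLLM(prompt: string): Promise<string>;

type Ref = string; // opaque handle such as "$VAR1"

const store = new Map<Ref, string>(); // owned by the controller, never by a model
let counter = 0;

// Quarantined side: sees untrusted web content, has no tools,
// and only ever hands back an opaque reference.
async function processUntrusted(rawHtml: string): Promise<Ref> {
  const summary = await callQuarantinedLLM(`Summarize this page:\n${rawHtml}`);
  const ref: Ref = `$VAR${++counter}`;
  store.set(ref, summary);
  return ref;
}

// Privileged side: plans and decides on actions, but only ever sees
// references, never the untrusted content behind them.
async function planNextAction(task: string, refs: Ref[]): Promise<string> {
  return callPrivilegedLLM(`Task: ${task}\nAvailable data: ${refs.join(", ")}`);
}

// Controller: plain code with no LLM reasoning. It substitutes real values
// only when executing an action the user has already approved.
function resolveForAction(ref: Ref): string {
  return store.get(ref) ?? "";
}
```

In that design the safety boundary lives entirely in the orchestration code around the models.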

We believe those controls can be integrated into the browser itself. The browser should do only what you authorize, limiting its exposure to untrusted content, its ability to act outside itself, and its access to private data.

That is why we built a lightweight browser from the ground up. It is fast, and crucially, it is small. Small means it can run locally, close to the agent, and only when needed. You do not have to trust a cloud provider or run a full desktop browser.

We’re building Lightpanda for AI agents first, not retrofitting a traditional browser. A lightweight browser with instant startup changes the way you can use it.

Instead of having multiple tabs in Chrome, you can start an instance of Lightpanda per task and reduce its privileges to your immediate needs: read-only access to one domain for the first step, permission to act only for the second.
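As an illustration of what that could look like from the agent’s side (the CLI flags, port, and timing are assumptions; check Lightpanda’s docs for the exact invocation), here is a sketch that spawns a fresh instance for a single read-only step and discards it afterwards:

```ts
// Sketch: one Lightpanda instance per task, discarded when the task ends.
// The "lightpanda serve" flags and port are assumptions; per-step privilege
// reduction is represented only by how the client uses the browser here
// (read-only extraction, no form submissions).
import { spawn } from "node:child_process";
import puppeteer from "puppeteer-core";

async function readOnlyStep(url: string): Promise<string> {
  const proc = spawn("lightpanda", ["serve", "--host", "127.0.0.1", "--port", "9222"]);
  try {
    // Crude wait for the CDP server to come up; real code should poll.
    await new Promise((resolve) => setTimeout(resolve, 500));

    const browser = await puppeteer.connect({
      browserWSEndpoint: "ws://127.0.0.1:9222",
    });
    const page = await browser.newPage();
    await page.goto(url);
    const text = await page.evaluate(() => document.body.innerText);
    await browser.disconnect();
    return text; // only this task's output goes back to the agent
  } finally {
    proc.kill(); // cookies, storage and state die with the process
  }
}
```

Because startup is effectively instant, spinning up one browser per step costs little.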

Our vision: one browser, one task

Lightpanda aims to provide the most secure foundation possible by letting agents operate closer to the machine. That way, agents can be sandboxed per task and scale safely in the cloud or on a workstation without exposing your data or machine.

Our vision is simple: one browser, one task. Each agent has its own browsers, and each browser instance handles only a single task. Browsers are isolated to the minimal scope they need to complete their work.

There is no silver bullet for securing agents. Designing the browser from the ground up is one step that brings that goal closer.