Through math, not pixels. Everything an agent needs to reason and act.

Layout is pure mathematics. Painting requires a GPU. Hollow keeps the layout step and drops everything after it.
Screenshots were designed for humans. So were DOM dumps. Hollow answers the questions an agent actually asks: what is on this page, where is it, and what can I do with it?
A function that runs, calculates, and terminates. No 300MB binary, no 2–4 second cold starts, no persistent server.
Hollow is a smart virtual browser engine that works across SPAs and other complex sites, giving AI agents what they need to complete tasks efficiently.
The agent reads. Decides. Acts. The function terminates. Nothing idles. Billing stops the moment the session ends.
A standard Next.js deploy. No complex infrastructure. No container orchestration. No browser binaries to manage.
Hollow uses Happy DOM to execute JavaScript and build the DOM tree, then Yoga — the same engine that powers React Native — to calculate exact coordinates. The GDG map is the output: element IDs, positions, relationships.
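The layout step really is just arithmetic. A minimal sketch of the flex-row math an engine like Yoga performs, heavily simplified (real Yoga implements the full flexbox spec); the types and function here are illustrative, not Hollow's internals:

```typescript
// Hypothetical simplification of the layout pass: place fixed-width
// children of a flex-row container left to right, like the nav in the
// sample map (two 80px links at x:0 and x:80).
interface Box { id: number; tag: string; text: string; width: number; height: number; }
interface Placed extends Box { x: number; y: number; }

function layoutFlexRow(children: Box[], originY: number): Placed[] {
  let cursorX = 0;
  return children.map((child) => {
    const placed = { ...child, x: cursorX, y: originY };
    cursorX += child.width; // next sibling starts where this one ends
    return placed;
  });
}

const nav = layoutFlexRow(
  [
    { id: 1, tag: "a", text: "Home", width: 80, height: 44 },
    { id: 2, tag: "a", text: "Login", width: 80, height: 44 },
  ],
  0,
);
// nav[0] sits at x:0, nav[1] at x:80 — matching the sample GDG map
```

Because nothing is ever rasterized, this pass needs no GPU and no framebuffer: exact coordinates fall out of a tree walk.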
Hollow intelligently analyzes each site and selects the optimal extraction strategy. No configuration needed. No manual tuning. Just a URL in, structured data out—covering ~90–95% of agent tasks across SPAs, content-heavy pages, and native apps.
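A sketch of what tier routing could look like. The tier names, signals, and thresholds below are invented for illustration, not Hollow's actual strategy set: the idea is simply to pick the cheapest extraction path the page allows.

```typescript
// Hypothetical tier router: cheapest strategy first, escalate as the
// page demands more JavaScript execution. Names are illustrative.
type Tier = "static" | "hydrated" | "full-js";

function routeTier(signals: { scriptBytes: number; frameworkDetected: boolean }): Tier {
  if (signals.scriptBytes === 0) return "static";    // plain HTML: parse only
  if (!signals.frameworkDetected) return "hydrated"; // light JS: run scripts once
  return "full-js";                                  // SPA: full Happy DOM execution
}

routeTier({ scriptBytes: 0, frameworkDetected: false });     // → "static"
routeTier({ scriptBytes: 50_000, frameworkDetected: true }); // → "full-js"
```

The tier chosen is surfaced in the response (the `tier` field in the API examples), so an agent can see which path was taken.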
The Matrix Mirror tags every element and makes it addressable. The SSE log shows every routing decision, confidence score, and action as it happens. Scan the QR code, paste the system prompt into Claude or GPT. Working browser in 30 seconds.
[Viewport: 1280x800]
[nav: flex-row y:0 h:44]
[1] a "Home" x:0 w:80 h:44
[2] a "Login" x:80 w:80 h:44
[main: flex-col y:44]
[3] input:email x:40 y:84 w:400 h:44
[4] button "Submit" x:40 y:140 w:400 h:48
[5] div.card x:40 y:200 w:400 h:200
[6] h2 "Title" x:56 y:216 w:368
[7] p "Body text" x:56 y:260 w:368

Structured spatial trees with element IDs, positions, and relationships. Everything an agent needs to reason and act. Nothing it doesn't.
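The element lines in the map above follow a regular shape, so an agent-side tool can parse them mechanically. A sketch, with the line format assumed from the example (fields like y: and h: are treated as optional, since container context can supply them):

```typescript
// Parse one interactive-element line of the spatial map, e.g.
//   [4] button "Submit" x:40 y:140 w:400 h:48
// Format is inferred from the sample output, not a published spec.
interface GdgElement {
  id: number;
  tag: string;   // tag with optional subtype/class, e.g. "input:email", "div.card"
  text?: string; // quoted label, if present
  x: number;
  y?: number;
  w: number;
  h?: number;
}

function parseGdgLine(line: string): GdgElement | null {
  const m = line.match(
    /^\[(\d+)\]\s+(\S+)(?:\s+"([^"]*)")?\s+x:(\d+)(?:\s+y:(\d+))?\s+w:(\d+)(?:\s+h:(\d+))?$/,
  );
  if (!m) return null;
  return {
    id: Number(m[1]),
    tag: m[2],
    text: m[3],
    x: Number(m[4]),
    y: m[5] !== undefined ? Number(m[5]) : undefined,
    w: Number(m[6]),
    h: m[7] !== undefined ? Number(m[7]) : undefined,
  };
}

parseGdgLine('[4] button "Submit" x:40 y:140 w:400 h:48');
// → { id: 4, tag: "button", text: "Submit", x: 40, y: 140, w: 400, h: 48 }
```

In practice an agent would consume the structured `gdgMap` from the API directly; the parser just shows how little ambiguity the textual form carries.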
We're building web perception for AI agents. Not a faster Puppeteer. Not a headless browser. Not a lighter browser. Something that fits the tooling reality of how agents actually run.
Hollow slots into any agent loop with two API calls. Perceive returns the spatial map. Act takes an element ID. The loop runs until done.
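That loop can be sketched in a few lines. The response shapes mirror the /api/perceive and /api/act examples in the quickstart; the `HollowClient` wrapper and `decide` callback are assumed for illustration (an official SDK is not implied):

```typescript
// Hypothetical agent loop over Hollow's two endpoints. The client is an
// assumed thin wrapper around POST /api/perceive and POST /api/act.
interface GdgMap { elements: { id: number; text: string }[]; }
interface Perception { sessionId: string; gdgMap: GdgMap; confidence: number; tier: string; }
type Action = { type: "click" | "type"; elementId: number; value?: string };

interface HollowClient {
  perceive(url: string): Promise<Perception>;
  act(sessionId: string, action: Action): Promise<Perception>;
}

// decide() stands in for the LLM: it reads the spatial map and returns
// the next action, or null when the task is done.
async function runLoop(
  client: HollowClient,
  url: string,
  decide: (map: GdgMap) => Action | null,
): Promise<Perception> {
  let state = await client.perceive(url);
  for (let action = decide(state.gdgMap); action; action = decide(state.gdgMap)) {
    state = await client.act(state.sessionId, action);
  }
  return state; // loop ends, function terminates, billing stops
}
```

Nothing persists between calls except the session ID, which is why the whole thing fits a serverless function.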
No binary to install. No Docker. No Chromium. A standard Next.js deploy on Vercel. Serverless pricing from the first request.
git clone https://github.com/Badgerion/hollow
cd hollow && npm install
# .env
UPSTASH_REDIS_REST_URL=...
UPSTASH_REDIS_REST_TOKEN=...
vercel deploy

/api/perceive
{ "url": "https://..." }
→ { sessionId, gdgMap, confidence, tier }

/api/act
{ sessionId, action: { type, elementId } }
→ { sessionId, gdgMap, confidence, tier }

Paste the system prompt into Claude, GPT, or Gemini. Working browser in 10 seconds. No API key. No setup.