Browser-native agent workspace

The browser is your agent's full workspace.

Memorall turns your browser into a local-first agent workspace with memory, tools, and model choice.

Memory you control · Browser + files + sandbox tools · Runs local-first · Local and remote model support
Workspace: Daily Browser Agent · Local-first mode active
Browser-native agent · 14 sources linked
Use the current tabs, docs, and notes to turn this browser work into a tighter product brief.
I pulled the active pages, linked the workspace files, attached prior topic memory, and queued the missing PDF pages for conversion.
Draft is ready. I can keep working from the same browser session, inspect the live page, edit files, or use the sandbox to prototype the next step.
Control: Memory stays tied to your browser work, files, and topic context instead of disappearing into prompts.
Action: Browser session, document FS, and sandbox runtime stay available to the same agent.
Knowledge graph
Enabled tools
Browser session (DOM)
Workspace FS (read/write)
Node sandbox (npm)
Flow builder (graph)
The browser is enough for a real agent
Browser memory you control
One browser agent for daily work
Browser + files + sandbox tools
Offline-first local runtime
Browser-local LLM support
Optional remote providers
Durable project memory
Sandbox for web apps and APIs
Configurable agent flows

Your work lives in the browser. Your agents usually do not.

You already live in the browser for research, docs, products, dashboards, and daily execution, but your agent keeps getting split across separate apps, IDEs, webpages, and command windows. Memorall turns the browser into the one place your agent can actually live and do the daily work.

The browser is where you live, but your agent doesn't live there.

Daily work happens in the browser, but the agent, memory, and costs keep getting pushed into separate products instead of staying in the place you already use most.

Agents run everywhere except your main workspace

Your agent ends up scattered across apps, IDEs, pages, and terminals instead of staying in the browser where you already spend most of the day.

No memory for daily browser work

Pages, selections, notes, screenshots, and research do not stay attached to the work, so the same context keeps getting lost and rebuilt.

You keep paying for fragmented add-ons

Memory, subscriptions, and extra services stack up across separate tools instead of starting from a strong local browser setup you control.

One browser agent covers the daily work.

Memorall keeps one strong agent in the browser, so it can learn from the sources you already use, keep memory under your control, and act with tools from the same home base.

Remember everything in your browser

Pages, selections, screenshots, PDFs, notes, and other browser sources stay attached to your work and controlled by you.

Run a strong agent fully inside the browser

It can use browser access, virtual filesystems, personal documents, sandbox execution, and configurable features from one place.

Keep the stack local until you choose otherwise

Memories, files, and even browser-local LLMs can stay on your machine, with multiple providers available when you want them.

Memory that learns from the browser and stays under your control.

The first core feature is memory. Memorall captures what you already use in the browser, turns it into durable context, and keeps it ready for recall instead of letting it disappear across tabs and prompts.

01

Learn from browser sources

Capture visible pages, selected text, screenshots, PDFs, Markdown, notes, and other browser resources without breaking the way you already work.

Pages + selections · Screenshots + files · Browser resources
02

Turn saved material into memory

Turn captures, notes, documents, and conversations into reusable memory that stays attached to the work instead of dissolving after one session.

Topic memory · Graph + retrieval · Reusable recall
03

Recall the work later

Bring memory back when you need it, with sources, files, and relationships still connected so the agent can work from more than the last prompt.

Source-linked context · Durable history · Long-running work

See how raw browser work becomes durable memory.

Start with live browser work, then watch pages, docs, notes, and sheets turn into memory the agent can inspect, retrieve, and reuse later.

Browser sources · Memory conversion · Durable recall
Memorall / memory pipeline · raw sources
Feature: Browser memory · stage 1 of 4
01

Capture what matters in the browser

Start with the real working set: webpages, PDFs, notes, spreadsheets, and snippets from the browser session you are already in.

02

Convert sources into memory

Those captures compress into graph-ready memory units. The visual shifts from raw sources to context the agent can retrieve, compare, and reference later.

03

Keep sources and relationships attached

Relationships start appearing between ideas, source files, pages, and prior work. Saved material stops being a pile of files and becomes usable memory.

04

Recall the work when it matters

The final view is durable memory the agent can inspect and use. Sources stay attached, so the work stays recoverable instead of disappearing into old sessions.
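The four stages above can be sketched as plain data transformations. This is a hedged TypeScript illustration with invented `Capture` and `MemoryUnit` shapes; Memorall's actual schema and graph model may differ.

```typescript
// Hypothetical shapes for illustration; Memorall's real schema may differ.
type Capture = {
  id: string;
  kind: "page" | "pdf" | "note" | "sheet";
  url?: string;
  text: string;
};

type MemoryUnit = {
  id: string;
  topic: string;
  summary: string;
  sourceIds: string[]; // stage 3: sources stay attached
  relatedTo: string[]; // stage 3: edges to other memory units
};

// Stage 2: compress raw captures into one graph-ready memory unit.
function toMemoryUnit(topic: string, captures: Capture[]): MemoryUnit {
  return {
    id: `mem-${topic}`,
    topic,
    summary: captures.map((c) => c.text.slice(0, 80)).join(" | "),
    sourceIds: captures.map((c) => c.id),
    relatedTo: [],
  };
}

// Stage 4: recall returns the unit with its sources still linked,
// not just a blob of text.
function recall(units: MemoryUnit[], topic: string): MemoryUnit | undefined {
  return units.find((u) => u.topic === topic);
}

const brief = toMemoryUnit("product-brief", [
  { id: "cap-1", kind: "page", url: "https://example.com/spec", text: "Spec page notes" },
  { id: "cap-2", kind: "pdf", text: "PDF excerpt" },
]);
```

Calling `recall([brief], "product-brief")` returns the unit with `sourceIds` `["cap-1", "cap-2"]` still attached, which is the property stage 4 describes.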

A strong agent that lives fully inside your browser.

The second core feature is the agent itself: a strong browser-native agent that can access the browser, virtual filesystems, personal documents, sandbox execution, and scalable feature layers for extension and customization.

Browser

Browser access built into the agent

Keep one agent inside the browser with live page access, DOM awareness, and the ability to work from the same pages where your daily work already happens.

Live pages · Page-aware actions
Agent

Virtual filesystems and personal documents

Give the agent access to virtual filesystems, stored documents, workspace files, and your personal working material without moving the job into a separate product.

Virtual filesystem · Personal documents
Sandbox

Sandbox for web apps and APIs

Let the agent use a browser-hosted sandbox to build web applications, run APIs, test code, and keep iterative work close to the same browser workspace.

Node sandbox · Web app + API tasks
Extend

Extend over MCP and feature layers

Start with the built-in capabilities, then extend the agent through MCP-ready architecture and scalable feature layers instead of being locked to one fixed assistant shape.

MCP-ready · Scalable features
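One way to picture this extension surface: tools registered behind a common interface that the agent can discover and call. A minimal TypeScript sketch with invented names (`ToolRegistry`, `read_page`, `run_sandbox`); this is not the MCP SDK API, only the registration-and-dispatch shape that MCP-style extension implies.

```typescript
// Invented names for illustration; not the actual MCP SDK surface.
type Tool = {
  name: string;
  description: string;
  handler: (args: Record<string, unknown>) => string;
};

class ToolRegistry {
  private tools = new Map<string, Tool>();

  register(tool: Tool): void {
    this.tools.set(tool.name, tool);
  }

  list(): string[] {
    return [...this.tools.keys()];
  }

  call(name: string, args: Record<string, unknown>): string {
    const tool = this.tools.get(name);
    if (!tool) throw new Error(`unknown tool: ${name}`);
    return tool.handler(args);
  }
}

// A built-in capability and an extension register through the same surface,
// so new feature layers add tools instead of reshaping the agent.
const registry = new ToolRegistry();
registry.register({
  name: "read_page",
  description: "Return the text of the current browser tab",
  handler: () => "page text",
});
registry.register({
  name: "run_sandbox",
  description: "Execute a snippet in the Node sandbox",
  handler: (args) => `ran: ${String(args.code ?? "")}`,
});
```

The design point is that `call` does not care whether a tool shipped with the product or arrived over MCP; both sit behind the same name-based dispatch.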
Customize

Customize agents for different jobs

Shape agents around different workflows with configurable features and structures, so the product can support more than one canned assistant behavior.

Configurable agents · Scalable structure
Share

Build once, share with others

The agent foundation is designed so your customized agent setups can be reused, evolved, and shared instead of staying trapped in a one-off session.

Reusable setups · Shareable agents

Offline-first local work, fully under your control.

The third core feature is offline-first local operation. Memories, files, documents, editors, and even local LLM workflows can stay on your machine, so the core experience does not depend on any external service.

01
Local memory

Keep memory on your machine

Save pages, captures, and memory locally so the working context stays under your control.

02
Local files

Work with files, docs, and editors locally

Keep PDFs, Markdown, spreadsheets, workspace files, and editable resources available in the same local environment.

03
Local runtime

Run local workflows without extra services

Let memory, files, and browser-hosted runtimes work together locally instead of depending on another required service.

04
Local control

Keep the whole stack under your control

Keep memories, documents, resources, and even local LLM execution closer to your machine and your decisions.

Multiple model providers, with strong browser-local LLMs.

The fourth core feature is model flexibility. Memorall supports multiple providers while staying especially strong for browser-local LLMs, so you can choose the runtime without changing the product workflow.

One browser workflow across local and remote model targets.

Prompts, memory, files, tools, and routing decisions pass through Memorall first. From there, the same browser workspace can send work to browser-local inference, local servers like Ollama or LM Studio, or remote providers like OpenAI and Anthropic when you choose them.

Memorall: model router and browser-native control plane
  • OpenAI (remote API target)
  • Anthropic (remote API target)
  • Azure / OpenRouter (swappable external gateway)
  • Ollama (local server runtime)
  • LM Studio (local server runtime)
  • WebLLM (browser-local inference)
  • Wllama (browser-local inference)
  • Transformers.js (browser-local inference)
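The routing idea above reads naturally as a tagged union over targets. This is a hedged TypeScript sketch with all type and function names invented; it shows only the "one request shape, many runtimes" structure, not Memorall's actual router.

```typescript
// Invented types; illustrates the routing shape only.
type Target =
  | { kind: "browser-local"; runtime: "webllm" | "wllama" | "transformers.js" }
  | { kind: "local-server"; runtime: "ollama" | "lm-studio"; baseUrl: string }
  | { kind: "remote"; provider: "openai" | "anthropic" | "azure" | "openrouter" };

// The request shape stays the same no matter where it is routed.
type ChatRequest = { messages: { role: "user" | "assistant"; content: string }[] };

function describeRoute(req: ChatRequest, target: Target): string {
  const n = req.messages.length;
  switch (target.kind) {
    case "browser-local":
      return `${n} message(s) -> in-browser inference via ${target.runtime}`;
    case "local-server":
      return `${n} message(s) -> local server ${target.runtime} at ${target.baseUrl}`;
    case "remote":
      return `${n} message(s) -> remote provider ${target.provider}`;
  }
}

const req: ChatRequest = { messages: [{ role: "user", content: "Summarize this tab" }] };
```

Swapping the `Target` value changes the destination, never the request: `describeRoute(req, { kind: "browser-local", runtime: "webllm" })` and `describeRoute(req, { kind: "remote", provider: "openai" })` take the same `req` untouched.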

Browser-local LLMs are first-class targets.

WebLLM, Wllama, and Transformers.js are treated as real runtime options, not as decorative fallbacks. You keep one workflow while choosing the local model path that fits the job.

Choose the runtime, keep the workflow.

WebLLM · browser-local inference target
Wllama · browser-local inference target
Transformers.js · browser-local inference target
Ollama + LM Studio · local server runtimes behind the same control surface
Remote APIs · OpenAI, Anthropic, Azure, and OpenRouter when explicitly enabled

Providers change, but the workflow stays the same.

The user-facing surface does not fragment just because the model target changes. Retrieval, document context, browser tools, workspace files, and agent logic still enter through the same product layer first.

Topic memory · Browser tools · Workspace files · Agent flows

Control is built into the architecture.

Memorall is built so the core experience can stay local and user-controlled. Optional auth and external providers exist, but the product does not force them on every workflow.

No mandatory account

The core app can run locally in the browser without forcing sign-in or a hosted database.

Data stays local by default

Pages, graph data, files, and conversations live in browser storage backed by PGlite and local file surfaces.
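The local-by-default claim can be sketched as a storage interface with a purely local implementation. In this hedged TypeScript sketch an in-memory `Map` stands in for the PGlite-backed store so the example runs anywhere; the interface and names are invented, not Memorall's actual storage API.

```typescript
// Invented interface; a Map stands in for the real PGlite-backed store.
type CapturedPage = { url: string; capturedAt: string; text: string };

interface CaptureStore {
  put(page: CapturedPage): void;
  get(url: string): CapturedPage | undefined;
}

// Purely local: no network calls, no hosted database, nothing leaves the object.
class LocalCaptureStore implements CaptureStore {
  private rows = new Map<string, CapturedPage>();

  put(page: CapturedPage): void {
    this.rows.set(page.url, page);
  }

  get(url: string): CapturedPage | undefined {
    return this.rows.get(url);
  }
}

const store: CaptureStore = new LocalCaptureStore();
store.put({
  url: "https://example.com/doc",
  capturedAt: "2024-01-01",
  text: "Captured page body",
});
```

A remote-backed `CaptureStore` could implement the same interface later, which is the "optional, not mandatory" shape this section describes: external services plug in behind the boundary rather than being required by it.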

You choose the model tradeoff

Use browser-hosted runtimes by default and connect external providers only when you want that tradeoff.

Memory stays attached to your work

Topics, files, graph nodes, and chats stay connected so the browser agent does not forget your work after one tab closes.

Make the browser the home base for your agent.

Memorall is for people who already work in tabs and want one browser agent for memory, files, tools, and models instead of scattered context across separate products.