Mosaic
AI-native interactive persona platform. Instead of reading a document about someone, visitors have a conversation with their AI persona — in first person, in their voice. Same data serves four surfaces: conversation, visual display, external AI tools via MCP, and agent-first web discoverability.

The Problem
A resume is a static document. It's one-dimensional — what you see is what you get. The reader parses it, forms an impression, and moves on. There's no interaction, no depth, no way to ask follow-up questions. And for the person it represents, it's a maintenance burden — a document that's always slightly out of date, always slightly incomplete.
We wanted a better model. Not just a better resume, but a better way to represent professional identity — one that's interactive, always current, and goes as deep as the conversation demands.
Instead of reading a document about someone, you have a conversation with their AI persona — in first person, in their voice.
What We Built
Mosaic is an AI-native platform where visitors interact with persona data through conversation. Ask questions, explore topics, go deep on whatever interests you. The agent responds naturally — not as a chatbot answering from a script, but as a reasoning system that understands the person it represents.
Two interfaces serve the same data. The web app blends conversation with a visual surface — structured content that progressively reveals as the conversation touches relevant topics. For users who prefer their own AI tools, an MCP server exposes persona data via the Model Context Protocol — Bring Your Own LLM.
The web app has two modes, driven by intent. Visitors arrive and start talking — the agent speaks as the person, in first person. No login prompt, no gate. When a visitor expresses intent to edit, authentication follows naturally. The same agent shifts voice — it becomes a collaborative helper with full editing capabilities. No mode switch, no separate admin panel. Intent determines voice, capability, and purpose — with authentication as the mechanism that enables certain intents.
Intent determines the agent's identity, voice, and capabilities. Same data, same codebase — different purpose.
Web App

Intent-driven authentication
When a visitor expresses intent to edit, authentication surfaces naturally within the conversation — no separate login page, no mode switch.

Agent-as-CMS
Once authenticated, the agent becomes a collaborative editor — analysing documents, proposing content changes, and managing the persona through conversation.
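To make the intent-gated split concrete, here is a minimal sketch in TypeScript. The prompts, tool names, and Intent type are illustrative assumptions, not Mosaic's actual identifiers; the point is that one agent loop runs in both cases, and authentication only changes which voice and which tools it is given.

```typescript
// Hypothetical sketch: one agent, two intents. Authentication changes which
// system prompt (voice) and which tools the agent receives, nothing else.
type Intent = "visit" | "edit";

interface AgentConfig {
  systemPrompt: string;
  tools: string[];
}

function configureAgent(intent: Intent, isAuthenticated: boolean): AgentConfig {
  if (intent === "edit" && isAuthenticated) {
    // Owner mode: collaborative helper with full editing capabilities.
    return {
      systemPrompt:
        "You are a collaborative editor helping the owner curate their persona.",
      tools: ["read_persona", "propose_edit", "apply_edit", "analyse_document"],
    };
  }
  // Visitor mode: the agent speaks as the person, in first person.
  // An unauthenticated edit intent falls through here too; the auth flow is
  // surfaced inside the conversation, then the agent is reconfigured.
  return {
    systemPrompt:
      "You speak as the person this persona represents, in first person.",
    tools: ["read_persona"],
  };
}
```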
The Thinking
Several architectural decisions shaped the project — not just how to build it, but how to think about AI-native products.
Content Engineering
Mosaic stores persona content as freeform markdown, not rigid database fields. An agent curates the content at write time — reasoning about structure, emphasis, and what matters. Every subsequent interaction benefits from that upfront curation.
We call this WAAG — Write-time Agent-Augmented Generation. It's the alternative to RAG when your content is curated and bounded. Instead of retrieving fragments from a vector database at query time, the agent reasons over the full content once, at write time. The result: richer context, more coherent responses, and no retrieval latency.
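As a rough sketch of the write-time half of WAAG, assuming the Anthropic TypeScript SDK: the prompt wording and model ID are placeholders, and the function name is ours. The reasoning happens once, when content changes, and the curated markdown it produces is what every later interaction serves whole.

```typescript
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic();

// Write-time curation: the agent reasons over the full raw content once.
// The curated markdown it returns is stored and served whole at read time;
// there is no vector store and no query-time retrieval.
async function curatePersonaSection(rawContent: string): Promise<string> {
  const response = await anthropic.messages.create({
    model: "claude-opus-4-20250514", // placeholder ID for the deep-reasoning tier
    max_tokens: 4096,
    system:
      "Curate this persona content as freeform markdown. Reason about structure, " +
      "emphasis, and what matters. Preserve the person's voice.",
    messages: [{ role: "user", content: rawContent }],
  });
  const block = response.content[0];
  return block.type === "text" ? block.text : "";
}
```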
Model selection matters here. Write-time curation uses Opus — the task demands deep reasoning about structure, emphasis, and voice. Real-time conversation uses Sonnet — balancing capability with the latency and cost requirements of interactive use. Pre-processing tasks, like distilling raw data before it enters the main context, use Haiku — fast and cost-efficient where deep reasoning isn't needed. Matching the model to the cognitive demand of the task is part of the architecture, not an afterthought.
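In code, that routing can be as simple as a lookup. A sketch, with placeholder model IDs and task names of our choosing:

```typescript
// Route each task to the cheapest model whose reasoning depth is sufficient.
type Task = "write_time_curation" | "realtime_conversation" | "preprocessing";

const MODEL_FOR_TASK: Record<Task, string> = {
  write_time_curation: "claude-opus-4-20250514",    // deep reasoning, latency-tolerant
  realtime_conversation: "claude-sonnet-4-20250514", // capable and fast enough for chat
  preprocessing: "claude-3-5-haiku-latest",           // cheap distillation, no deep reasoning
};

export function modelFor(task: Task): string {
  return MODEL_FOR_TASK[task];
}
```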
Principles Over Scripts
Early versions used explicit guardrails: DO this, DON'T do that. The evolution was toward principle-based prompting — giving the agent context and principles to reason from, not scripts to follow. Principles preserve autonomy and handle novel situations that rigid rules can't anticipate. The agent should think, not execute.
Trust at the Data Layer
You can constrain an agent's behaviour through prompts, but you can't control the authenticity of the underlying data. Trust doesn't come from prompt engineering — it comes from verifiable data sources. Mosaic is designed as a foundation for future verification: GitHub contributions, LinkedIn profiles, credentials. The architecture assumes trust will be earned at the data layer, not enforced at the prompt layer.
Give agents principles and authentic data, not scripts and guardrails.
Four Surfaces, One Data Model
One of Mosaic's most interesting properties: the same persona data serves four different surfaces, each optimised for its consumer.
Conversation — visitors chat with the persona through the web app. The agent reasons over the full content to respond naturally, in first person.
Visual — a structured display surface that progressively reveals curated content sections as conversation touches relevant topics. The agent selects what to show; the content was pre-distilled at edit time. The conversation builds the mosaic (see the reveal sketch below).
Connected — external AI tools connect via MCP (Model Context Protocol). The visitor's own LLM does the reasoning. This is Bring Your Own LLM — your model, your conversation, Mosaic's data. When you don't control the model, data quality and tool design become your only control surfaces.
Discovery — an agent-first web layer makes personas findable by AI agents searching the web. Content negotiation serves curated markdown to agents and interactive HTML to humans from the same URL (see the negotiation sketch below). The same data that powers conversation also powers discoverability — no conversion layer, no retrofit.
The architectural insight: content engineering at write time serves all four surfaces. Structure the data once, serve it appropriately for each consumer.
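The Visual surface's reveal mechanism could be as small as an event the conversation agent emits; the shape below is a hypothetical sketch, not Mosaic's actual schema.

```typescript
// Hypothetical shape of a progressive-reveal event. The runtime decision is
// only "which pre-distilled sections to show", never "what to write".
interface RevealEvent {
  sectionIds: string[]; // e.g. ["projects/mosaic", "skills/llm-architecture"]
  reason: string;       // why the conversation surfaced these sections
}

// Tiles accumulate as the conversation goes on: the conversation builds the mosaic.
function applyReveal(visible: Set<string>, event: RevealEvent): Set<string> {
  const next = new Set(visible);
  for (const id of event.sectionIds) next.add(id);
  return next;
}
```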
When you give up control of the LLM, data quality becomes your only control surface.
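The Discovery surface's content negotiation is the most mechanical of the four. A sketch, assuming an Express-style handler; the route, file layout, and markdown check are illustrative assumptions:

```typescript
import express from "express";
import { readFile } from "node:fs/promises";
import path from "node:path";

const app = express();

// One URL, two representations: agents that ask for markdown get the curated
// persona markdown; browsers get the interactive HTML shell.
app.get("/persona/:slug", async (req, res) => {
  const accept = req.get("Accept") ?? "";
  const wantsMarkdown =
    accept.includes("text/markdown") || accept.includes("text/plain");

  if (wantsMarkdown) {
    // Serve the same curated markdown the conversation agent reasons over.
    const md = await readFile(
      path.join("content", `${req.params.slug}.md`), // hypothetical layout
      "utf8"
    );
    res.type("text/markdown").send(md);
  } else {
    res.sendFile(path.resolve("public", "index.html")); // hypothetical SPA shell
  }
});

app.listen(3000);
```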
MCP — Connected Surface

Persona discovery via MCP
External AI tools query Mosaic data through the Model Context Protocol. The visitor's own LLM reasons over the results — Bring Your Own LLM.

Owner editing via MCP
Owners manage their persona through any MCP-compatible tool. Same authenticated capabilities, different interface — the agent-as-CMS pattern extends beyond the web app.
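A minimal sketch of the Connected surface, assuming the official TypeScript MCP SDK (@modelcontextprotocol/sdk); the tool name, parameter, and placeholder content are illustrative, not Mosaic's actual interface.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// The server exposes persona data as tools; the visitor's own LLM does the
// reasoning. Bring Your Own LLM: your model, your conversation, Mosaic's data.
const server = new McpServer({ name: "mosaic-persona", version: "0.1.0" });

server.tool(
  "get_persona_section",
  "Fetch a curated section of the persona as markdown.",
  { section: z.string().describe("e.g. 'experience', 'projects'") },
  async ({ section }) => ({
    content: [{ type: "text", text: `# ${section}\n\n(curated markdown here)` }],
  })
);

await server.connect(new StdioServerTransport());
```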
What Mosaic Demonstrated
Mosaic is a running product — not a demo, not a proof of concept that lives on localhost. The patterns it established — first-person voice, content engineering, principle-based prompting, the four-surface architecture — directly informed the site you're reading now.
The Gramercy site is built on the same architectural thinking: content engineered for AI consumption, an agent that speaks as the entity, authentication-driven capabilities, and content that serves every surface from a single source.
Want to experience it? Talk to the founder's AI persona on Mosaic, or connect your own AI tool via MCP.
The thinking that built Mosaic is the thinking behind this site. The medium is the message.