Memory & Learning
How It Learns
Your agent isn’t a chatbot with amnesia. It remembers. Conversations, corrections, client context, the shape of your work. All of it, stored in an isolated graph database scoped to you. Nothing shared, nothing leaked, nothing lost between sessions.
What It Remembers
Persistent memory is the compounding layer. Every meaningful fact, preference, and correction gets written to its graph and surfaced when it’s relevant. Not jammed into a prompt window in the hope some of it sticks. A sketch of the loop sits after the list below.
Your clients and contacts
Names, relationships, history, preferences, the weird thing so-and-so hates. Ask about a client six months later. Your agent still knows.
Your corrections
Every time you fix it. Tone, format, a wrong fact, a better way. The correction sticks. It doesn’t make the same mistake twice, and it doesn’t need you to say it again.
Your workflows
The sequences you run, the tools you use, the order you do things. Your agent learns the shape of your work and anticipates what comes next.
Your preferences
Voice, structure, defaults, the way you like a report laid out. Your agent adapts to you, not the other way around.
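Under the hood, the idea is a simple write-and-recall loop. Here is a minimal Python sketch of that loop, nothing more: every name in it (MemoryGraph, Fact, remember, recall) is ours, invented for illustration, not the product’s actual API.

```python
# A minimal sketch of the write-and-recall loop. All names are hypothetical,
# chosen to illustrate the idea, not the product's real interface.
from dataclasses import dataclass, field

@dataclass
class Fact:
    subject: str   # e.g. a client's name
    detail: str    # the thing worth keeping
    kind: str      # "contact" | "correction" | "workflow" | "preference"

@dataclass
class MemoryGraph:
    facts: list[Fact] = field(default_factory=list)

    def remember(self, fact: Fact) -> None:
        # Persist a meaningful fact instead of re-stuffing it into every prompt.
        self.facts.append(fact)

    def recall(self, topic: str) -> list[Fact]:
        # Surface only what's relevant to the task at hand.
        return [f for f in self.facts if topic.lower() in f.subject.lower()]

graph = MemoryGraph()
graph.remember(Fact("Acme Pty Ltd", "CFO hates bullet points in summaries", "contact"))
graph.remember(Fact("weekly report", "lead with numbers, narrative second", "preference"))

# Ask about the client six months later: the fact is still there.
print(graph.recall("acme"))
```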
What Stays Ephemeral
Not everything earns a permanent seat. Session-only context exists while you’re working on something and is released when the task closes. That keeps the graph signal-dense rather than a landfill of half-thoughts. A sketch of the split sits after the examples below.
Current-task chatter
Back-and-forth while you’re working through one thing. Drafts, iterations, throwaway prompts. Kept for the session, then released.
Scratch work
Exploratory reasoning, partial outputs you discarded, the three ways it tried before the one you kept. Not stored long-term.
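The split is easy to picture: a throwaway buffer per session, a persistent graph behind it, and an explicit promotion step in between. A minimal sketch under our own naming, not the product’s implementation:

```python
# A sketch of session-scoped vs. persistent context. Names are hypothetical.
class Session:
    def __init__(self, graph: list[str]):
        self.graph = graph            # persistent, per-user memory
        self.scratch: list[str] = []  # drafts, iterations, throwaway prompts

    def note(self, text: str) -> None:
        self.scratch.append(text)     # kept for the session only

    def promote(self, text: str) -> None:
        self.graph.append(text)       # earns a permanent seat in the graph

    def close(self) -> None:
        self.scratch.clear()          # released when the task closes

graph: list[str] = []
s = Session(graph)
s.note("draft v1 of the email")            # ephemeral
s.promote("client prefers morning calls")  # persistent
s.close()
assert s.scratch == [] and graph == ["client prefers morning calls"]
```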
How to Teach It
Three mechanics, all compounding. Tell it explicitly what you want it to remember. Correct it inline the moment it’s wrong. Let feedback accumulate. The more you guide it, the sharper it gets. Example phrasings for each pattern follow below.
Tell it a preference explicitly
Correct a mistake inline
Teach a workflow
Set a permanent rule
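Illustrative phrasings, one per pattern above. The wording is ours, not canonical; say it however you naturally would.

A preference, stated explicitly: “Remember this: every client email in plain English. No jargon, ever.”
An inline correction: “That settlement date is wrong. It’s 14 March, not 4 March. Keep the fix.”
A workflow: “When a new lead comes in: check the CRM, draft the intro, book the discovery call. Same order, every time.”
A permanent rule: “Nothing goes to a client without my sign-off. Permanent rule.”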
Receipts, Not Just Memory
For regulated industries. Mortgage broking, financial services, anyone under AML/CTF. Memory isn’t enough. You need receipts. Your agent holds a 7-year audit trail of every interaction, decision, and action, aligned to AML/CTF s.116(3) record-keeping obligations. One entry is sketched after the list below.
What the audit trail captures
- Every conversation, timestamped and attributable.
- Every action taken on your behalf. Emails sent, files read, decisions made.
- Every correction, override, and guardrail change.
- Exportable on demand for regulator requests, audits, or internal review.
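What might a single entry look like? A sketch, with field names that are our assumption for illustration; only the obligations above come from this page.

```python
# A hypothetical audit-trail entry: timestamped, attributable, exportable.
# Field names are our invention, not a documented schema.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AuditEntry:
    timestamp: str  # when it happened, UTC
    actor: str      # who, or which agent acting for whom
    action: str     # "conversation" | "email_sent" | "file_read" | "override" ...
    detail: str     # what was said, sent, or changed

entry = AuditEntry(
    timestamp=datetime.now(timezone.utc).isoformat(),
    actor="agent:on-behalf-of:jane@example.com",
    action="email_sent",
    detail="Sent settlement reminder to Acme Pty Ltd",
)

# Exportable on demand: serialise the trail for a regulator or internal review.
print(json.dumps(asdict(entry), indent=2))
```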
Memory Under Isolation
Memory is scoped per-tenant and per-user. Your graph lives in an isolated database that no other tenant, no other user on the platform, and no shared process can read from. There is no cross-tenant pooling, no model-level leakage, no “accidentally trained on your data” surprise. The scoping rule is sketched after the breakdown below.
Per-tenant boundary
Your organisation’s graph is physically isolated. Another tenant’s agent cannot query it, full stop.
Per-user scoping
Inside your tenant, each user’s private context is scoped to them. Shared team context is explicit, not accidental.
No training-loop leakage
Your memory is not used to train foundation models. It’s yours, it stays yours, and it doesn’t end up in anyone else’s weights.
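The scoping rule, sketched in miniature. Hypothetical names and our own assumptions about the mechanics, nothing more; the real architecture is documented in Security & Privacy.

```python
# A toy model of per-tenant isolation and per-user scoping. All names are ours.
class IsolationError(Exception):
    pass

# Conceptually: one separate store per tenant, never a shared table.
TENANT_DBS: dict[str, dict[str, list[str]]] = {}

def db_for(tenant_id: str) -> dict[str, list[str]]:
    return TENANT_DBS.setdefault(tenant_id, {})

def read_memory(tenant_id: str, user_id: str, requesting_tenant: str) -> list[str]:
    if requesting_tenant != tenant_id:
        # Another tenant's agent cannot query it, full stop.
        raise IsolationError("cross-tenant read refused")
    return db_for(tenant_id).get(user_id, [])

db_for("acme")["jane"] = ["prefers morning calls"]
print(read_memory("acme", "jane", requesting_tenant="acme"))  # allowed
# read_memory("acme", "jane", requesting_tenant="globex")     # -> IsolationError
```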
Details on the isolation architecture live in Security & Privacy.
Month One vs. Month Six
The value curve isn’t linear. It compounds. A fresh agent is capable. A six-month agent is yours, in a way no generic model ever will be.
Month One
Capable
It does what you ask well. You’re explicit about context. Who, what, how. Every time. Good, useful, still generic.
Month Six
Yours
Names land without introduction. Tone is already right. It flags the thing you’d have flagged. The work it returns looks like the work you would have done. Only faster, and without you having to do it.
The compound rule
Every correction you give today is a correction you never have to give again. Every preference you set compounds into every future output. Memory is the reason month six doesn’t look like month one.