The short version
Most AI agent memory today is a vector store: text chunked, embedded, and retrieved by semantic similarity. That works for facts. It breaks the moment you need to remember a relationship — who told you, when, how much you trust them, what they still owe you.
Here is a different primitive: the contact graph. Memory modeled as a social network instead of a vector store. Every agent you have ever talked to is a node. Every meaningful exchange is an edge. Facts attach to edges, not to orphan chunks of text.
Three things that need separate plumbing today fall out of the shape for free: provenance, trust, and staleness. This is really a context-engineering problem dressed up as a storage problem — you cannot ship a good context strategy on top of the wrong memory primitive.
The problem: flat memory loses the edges
Picture an agent that has worked with three others over the past week. A calendar agent handled some scheduling. A research agent pulled a few reports. A customer-facing agent forwarded an inbound request. All three conversations went into memory as chunks, got embedded, and got stored.
On Friday you ask: "did anyone mention the Q2 budget?"
The vector store returns three matches with near-identical similarity scores, because "Q2 budget" appears in all three. One chunk is the calendar agent stating a figure. One is the research agent speculating. One is the customer agent asking a question. Retrieval ranked them as interchangeable — they are not. One was a statement, one was a guess, one was a question, and they came from sources with different reliability histories.
This is not a retrieval bug. The bug is upstream. We recorded the text and threw away the edges around it: who said it, when, why, and what obligations attached to it. Embeddings compress meaning. They do not compress relationships.
The fix: model memory as a social network
A contact graph treats every agent your agent has talked to as a node, and every meaningful exchange as an edge.
The node carries what that other agent is: identity, capabilities, claimed knowledge, trust score. The edge carries what happened between you: what was said, what was promised, what was delivered.
Facts do not live in a separate store. They live on edges, attached to the contact who supplied them. A claim is always a claim by someone, made at a specific time. Remove the contact and the claims go with them. Lose trust in the contact and every future fact from them carries a lower weight.
Querying changes shape too. Instead of "find me relevant chunks", the agent asks "what did Ada tell me about X last week?" or "who are the two contacts most likely to know Y?". Those are graph traversals, not similarity searches.
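To make that shape concrete, here is a minimal sketch. The `claimsFrom` helper, the pared-down edge type, and the contacts are illustrative, not a real API: the "what did Ada tell me about X last week?" question becomes a filter over edges from a known contact.

```typescript
// Pared-down edge shape for the sketch; the fuller schema is below.
type Claim = { topic: string; text: string; confidence: number };
type Edge = { from: string; timestamp: number; claims: Claim[] };

// "What did Ada tell me about X since Monday?" is a traversal, not a
// similarity search: walk the edges from one known contact.
function claimsFrom(edges: Edge[], contact: string, topic: string, since: number): Claim[] {
  return edges
    .filter(e => e.from === contact && e.timestamp >= since)
    .flatMap(e => e.claims.filter(c => c.topic === topic));
}

const edges: Edge[] = [
  { from: "ada", timestamp: 1000, claims: [{ topic: "q2-budget", text: "Q2 budget is $40k", confidence: 0.9 }] },
  { from: "bob", timestamp: 1200, claims: [{ topic: "q2-budget", text: "Probably around $55k?", confidence: 0.4 }] },
];

claimsFrom(edges, "ada", "q2-budget", 0); // one claim, from ada, with its own confidence attached
```

Notice that the answer comes back with its source and confidence built in; there is no re-ranking step to reconstruct them.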
The underlying data structure is not new. Knowledge graphs and semantic networks have long treated relationships as first-class. What is underexplored is applying the same shape to agent-to-agent memory at a time when agents talk to other agents constantly and cannot afford to treat every utterance as a flat document.
What the schema looks like
A contact node carries:
- a stable identifier (a DID, a public key, something that is not a display name)
- declared capabilities (what the agent says it can do)
- observed capabilities (what you have actually seen it do)
- a trust score, local to you, updated per interaction
- a freshness policy (how long claims from this contact stay live before re-confirmation)
An interaction edge carries:
- a timestamp
- a summary of what was exchanged
- specific claims made, each tagged with its own confidence
- obligations created (you owe them something, they owe you something)
- the outcome, if it is known yet
Rendered as a schema, the shape is small:
```typescript
// Supporting types, sketched minimally here; real systems will want richer versions.
type Claim = { topic: string; text: string; confidence: number };
type Obligation = { owedBy: string; owedTo: string; what: string; dueBy?: number };
type FreshnessPolicy = { halfLifeMs: number }; // how fast this contact's claims go stale

interface ContactNode {
  id: string;                       // stable identifier: DID, public key
  declaredCapabilities: string[];   // what the agent claims it can do
  observedCapabilities: string[];   // what you have actually seen it do
  trustScore: number;               // local to you, updated per interaction
  freshnessPolicy: FreshnessPolicy; // how long claims from this contact stay live
}

interface InteractionEdge {
  from: string;              // contact id of the speaker
  to: string;                // contact id of the listener
  timestamp: number;
  summary: string;
  claims: Claim[];           // each with its own confidence
  obligations: Obligation[]; // who owes whom, what, and by when
  outcome: "pending" | "fulfilled" | "broken";
}
```

Real systems add revocation, delegation, and group memberships. The five and five above are enough to start doing real work.
Three questions a contact graph can answer (that a vector store can't)
Provenance. Who told me this, and when? In a vector store, the answer is "a chunk" and maybe a document title. In a contact graph, the answer is a specific contact, at a specific time, with a specific confidence. Every fact in working memory can be traced back to the edge it rode in on.
Contradiction. Has anyone contradicted themselves, or contradicted someone else? Flat memory cannot easily answer this — conflicting statements sit as unrelated chunks. A contact graph stores them on the same edge or on edges to the same node, and inconsistency becomes a structural property of the graph rather than a semantic subtlety you have to reason about.
Accountability. What did I promise the calendar agent on Tuesday, and did I keep it? Obligations live on edges. You can walk them, mark them open or closed. You cannot do any of this when an exchange is stored as free text.
These are not edge cases. They are the questions any serious multi-agent system has to answer before it can be trusted to take actions on anyone's behalf.
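All three questions reduce to walks over the same edge list. A hedged sketch against a toy graph follows; the function names, the `open` flag on obligations, and the contacts are assumptions for illustration, not a fixed API.

```typescript
type Claim = { topic: string; text: string; confidence: number };
type Obligation = { owedBy: string; owedTo: string; what: string; open: boolean };
type Edge = { from: string; to: string; timestamp: number; claims: Claim[]; obligations: Obligation[] };

const edges: Edge[] = [
  { from: "calendar", to: "me", timestamp: 10, claims: [{ topic: "q2-budget", text: "$40k", confidence: 0.9 }], obligations: [] },
  { from: "research", to: "me", timestamp: 20, claims: [{ topic: "q2-budget", text: "$55k", confidence: 0.4 }], obligations: [] },
  { from: "me", to: "calendar", timestamp: 30, claims: [], obligations: [{ owedBy: "me", owedTo: "calendar", what: "send availability", open: true }] },
];

// Provenance: who told me this, and when?
function provenance(topic: string) {
  return edges.flatMap(e =>
    e.claims.filter(c => c.topic === topic)
      .map(c => ({ who: e.from, when: e.timestamp, confidence: c.confidence })));
}

// Contradiction: do claims on the same topic disagree? Structural, not semantic.
function contradictions(topic: string): boolean {
  const texts = new Set(edges.flatMap(e => e.claims.filter(c => c.topic === topic).map(c => c.text)));
  return texts.size > 1;
}

// Accountability: what do I still owe, and to whom?
function openObligations(me: string) {
  return edges.flatMap(e => e.obligations.filter(o => o.owedBy === me && o.open));
}
```

Equality of claim texts is a deliberately crude contradiction test; a real system would normalize claims first. The point is that the candidates for comparison are found by structure, not by search.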
A worked example: two voice agents meet
A caller reaches an inbound voice agent at a dental practice. Two minutes in, they ask a billing question the dental agent cannot handle, so it hands the call to a billing agent on the same network.
In a vector-store world, each agent keeps its own pile of text. Next week, when the same caller returns, neither agent knows they have met this person before.
In a contact-graph world, the dental agent opens a contact with the billing agent and writes an edge: "handoff, caller asked about invoice 4821, transferred at 10:42". After the call closes, the billing agent writes back a return edge: "handled, caller opened a dispute, unresolved". Both agents now know about each other, and about the caller through each other.
When the caller returns, the dental agent does not start from zero. It walks the edges. It asks the billing agent: "have you seen this caller recently, did we leave anything open?". The billing agent answers from its side of the same edge. Context arrives not as a retrieval from a pile of text, but as a structured answer from a known source.
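The whole exchange can be sketched as two edge writes and one traversal. The agent names, the `caller` field, and the `openWith` helper are illustrative assumptions, not part of any real framework.

```typescript
type Edge = {
  from: string;
  to: string;
  timestamp: number;
  summary: string;
  outcome: "pending" | "fulfilled" | "broken";
  caller?: string; // illustrative: tag edges with the human caller they concern
};

const graph: Edge[] = [];

// Dental agent writes the handoff edge at transfer time.
graph.push({ from: "dental", to: "billing", timestamp: 1, summary: "handoff: caller asked about invoice 4821, transferred at 10:42", outcome: "pending", caller: "caller-17" });

// Billing agent writes the return edge after the call closes; the dispute is unresolved.
graph.push({ from: "billing", to: "dental", timestamp: 2, summary: "handled: caller opened a dispute", outcome: "pending", caller: "caller-17" });

// Next week: "did we leave anything open with this caller?"
function openWith(caller: string): Edge[] {
  return graph.filter(e => e.caller === caller && e.outcome === "pending");
}
```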
None of this requires a new model. It requires a schema and a discipline about where facts live.
Three properties you get for free
Most agent systems hack these in separately. A contact graph hands all three to you as a side effect of the shape.
Provenance. Every fact is attached to the edge that produced it. There are no orphan truths in a contact graph. Remove a contact and every claim they made goes with them, automatically. You do not need a provenance subsystem glued on top.
Staleness. Freshness policies live per contact. Things the calendar agent said three hours ago are not equivalent to things a research agent said three days ago, even if the embeddings happen to match. A contact graph lets confidence decay along the edge based on the contact's own freshness rules.
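One way to implement per-contact decay is an exponential half-life; that curve is an assumption here, one reasonable policy rather than the only one, and the two example half-lives are made up.

```typescript
type FreshnessPolicy = { halfLifeMs: number };

// Confidence decays along the edge by the contact's own freshness policy.
function decayedConfidence(confidence: number, ageMs: number, policy: FreshnessPolicy): number {
  return confidence * Math.pow(0.5, ageMs / policy.halfLifeMs);
}

const calendar: FreshnessPolicy = { halfLifeMs: 6 * 3600_000 };       // scheduling goes stale in hours
const research: FreshnessPolicy = { halfLifeMs: 7 * 24 * 3600_000 };  // reports stay live for days

decayedConfidence(0.9, 3 * 3600_000, calendar);      // ~0.64 after three hours
decayedConfidence(0.9, 3 * 24 * 3600_000, research); // ~0.67 after three days
```

The same starting confidence decays at very different rates, so the calendar agent's three-hour-old claim and the research agent's three-day-old claim end up with similar weight — which is the point: freshness is a property of the source, not the text.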
Trust. Reliability is a property of the relationship, not the chunk. When a contact turns out to have been wrong about something, you do not down-weight a fragment of text. You update the contact, and every future claim from them carries the correction automatically. This is how humans handle unreliable sources, and it has not yet made it into how agents handle them.
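A minimal sketch of that update, assuming a simple exponential moving average on the contact's trust score; the learning rate and the multiplicative weighting are design choices, not prescriptions from the schema.

```typescript
type Contact = { id: string; trustScore: number };

// Move trust toward 1 on a kept promise, toward 0 on a broken one.
function updateTrust(c: Contact, outcome: "fulfilled" | "broken", rate = 0.2): void {
  const target = outcome === "fulfilled" ? 1 : 0;
  c.trustScore += rate * (target - c.trustScore); // exponential moving average
}

// Every claim is weighted by the contact's current trust at read time.
function weightedConfidence(claimConfidence: number, c: Contact): number {
  return claimConfidence * c.trustScore;
}

const ada: Contact = { id: "ada", trustScore: 0.8 };
updateTrust(ada, "broken"); // ada missed a commitment: trust drops from 0.8 to 0.64
weightedConfidence(0.9, ada); // every future claim carries the correction, with no stored text touched
```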
Where this breaks
Solo agents with no peers have no graph. The primitive is pointless for a single-agent workflow that never talks to anyone else.
Identity is the next hard thing. A contact graph is only as useful as the stability of its node identifiers. If Ada is a public key, fine. If Ada is a display name, the graph will rot within a week as agents churn. This is a solved problem in principle (DIDs, verifiable credentials, capability tokens) and unsolved in practice for most agent frameworks today.
Graph bloat is real. A busy agent could accumulate thousands of contacts, most of them low-signal. Pruning, archiving, and a casual-versus-durable distinction all need design work.
Merging and conflict resolution are probably the deepest unsolved pieces. The graph lets you represent disagreement between contacts naturally. It does not tell you how to act when two contacts disagree on something you now need to decide on. That is a reasoning problem, not a storage one, and it does not go away just because the shape of memory is nicer.
The contact graph does not solve reasoning. It just gives reasoning honest inputs — memory that knows who said what, when, and with what weight. Whatever decision procedure comes next, it has a better substrate to run on.