I call it the Context Gap — the asymmetry between people who have AI memory infrastructure and people who don't. And it's about to get a lot worse.
I spend more time wiring up my AI's memory than most people spend on the actual deliverable.
That sounds insane. But it's the thing that actually makes AI useful.
I run projects with moving parts — budgets that shift, timelines that change, technical details that evolve week to week. The kind of work where last Tuesday's decision affects next Friday's deliverable, and nobody has time to re-read every email thread to piece the story together.
So I've built context systems. Not the sexy kind of AI work — not agents doing backflips or chatbots having existential crises. The plumbing. Memory infrastructure that lets my AI retrieve the right information, understand how a project has developed over time, and draft responses I can trust in under a minute.
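The essay never describes the actual implementation, but the core of that plumbing can be surprisingly small. As a rough sketch (the class names, scoring, and note format here are my own invention, not a description of any real product): a store of dated project notes, retrieved by keyword overlap, recency breaking ties.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class Note:
    project: str
    day: date
    text: str


@dataclass
class ContextStore:
    """A toy memory layer: dated notes, retrieved by keyword overlap."""
    notes: list[Note] = field(default_factory=list)

    def add(self, project: str, day: date, text: str) -> None:
        self.notes.append(Note(project, day, text))

    def retrieve(self, query: str, k: int = 3) -> list[Note]:
        # Score each note by how many query words it shares,
        # preferring more recent notes when scores tie.
        words = set(query.lower().split())
        scored = sorted(
            self.notes,
            key=lambda n: (len(words & set(n.text.lower().split())), n.day),
            reverse=True,
        )
        return scored[:k]
```

Real systems use embeddings rather than word overlap, but the shape is the same: capture every decision as it happens, then surface the relevant ones when an email lands.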
It's tedious. It's unglamorous. And it might be the most important thing I do.
The Moment It Clicked
I send a clear, concise project update. The tracking is working — here's the data, here are the numbers, here's what changed and why. Every question the other side might have is pre-empted. The email is structured, complete, and self-contained.
The reply comes back within hours: "Just to clarify — which project is this referring to? Can you confirm the budget? What was that URL again?"
Every answer is in the email they're replying to. Or in the report I sent last week. Or in the thread three messages below their cursor.
That's when I realised — the bottleneck was never my AI. It was theirs. Or more accurately, the complete absence of one.
The Context Gap
This asymmetry, between people who have AI memory infrastructure and people who don't, is what I mean by the Context Gap.
Yes, I have tools most people don't. And if I'm honest, that's partly on me.
I've been building retrieval systems so that when an email lands, my AI already knows the full context — every budget change, every timeline shift, every decision and its rationale. I can draft a response in minutes that would otherwise take thirty.
But here's the thing: if I know my collaborator doesn't have that infrastructure, and I'm still frustrated when they can't match my context — that's me failing to design for the system I'm actually in.
The Context Gap isn't just their problem to solve. It's mine too.
Ethan Mollick has written about AI's "jagged frontier": uneven capability gains that leave some tasks transformed while others stay stubbornly human. Collaboration sits on one of those jagged edges. My AI made me faster. It didn't make the handoff faster. That's where the system breaks.
What Actually Helps
I've been thinking about what I wish I'd known earlier — and what I'd tell collaborators who don't have time to build their own AI memory systems.
If you're building context systems:
- Design for the lowest-context person in the chain, not yourself. Your AI knows the history. They don't. Write like they're starting fresh.
- Front-load the "why" and the "what changed." Bury the detail. Most people read the first two lines and skim the rest.
- When you get a clarifying question that frustrates you, treat it as a signal your communication didn't land — not that they're lazy.
If you're on the other side:
- Before you send a "quick clarification," scan the email one more time. Ctrl+F the keyword. It might already be there.
- If you're genuinely overwhelmed, say that. "I don't have the bandwidth to dig into this — can you give me the one-line version?" is more honest than asking questions you could answer yourself.
- Consider building even a lightweight version of this. A shared doc per project. A thread you actually update. A note in your tool of choice that captures the last known state. It doesn't have to be AI. It just has to exist.
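The last suggestion above barely needs tooling. As one hedged illustration (the file layout and field names are mine, purely hypothetical): a few lines that append a dated "last known state" entry to a per-project note.

```python
from datetime import date
from pathlib import Path


def log_state(project: str, state: str, root: Path = Path("project-notes")) -> Path:
    """Append a dated 'last known state' line to the project's note file."""
    root.mkdir(exist_ok=True)
    path = root / f"{project}.md"
    with path.open("a", encoding="utf-8") as f:
        f.write(f"- {date.today().isoformat()}: {state}\n")
    return path
```

Run `log_state("website-redesign", "Budget approved; launch pushed two weeks.")` whenever something changes, and anyone (or any AI) can read the file top to bottom and reconstruct the project's history. It doesn't have to be this; a shared doc works just as well. The point is that the state lives somewhere other than a person's head.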
Why This Gets Worse Before It Gets Better
Here's the part most people aren't thinking about yet: the Context Gap is about to widen — fast.
Right now, the asymmetry is between people with AI infrastructure and people without. But within a year or two, agents will be drafting on both sides of the conversation. You'll have AI-generated emails responding to AI-generated emails, with humans vaguely supervising handoffs they don't fully understand.
When that happens, the ability to bridge context gaps — to design communications that survive multiple AI-to-AI relays and still make sense to the human at the end — becomes a genuine skill. The people who learn to do that now, while the gap is still manageable, will have a compounding advantage over those who keep optimising for their own speed.
Stewart Butterfield, when he launched Slack, told his team: "We are not selling saddles here. We are changing how people spend their time." The same applies to AI collaboration infrastructure. It's not about the tool. It's about the workflow it makes possible, or impossible, for the people around you.
The Bigger Point
We're all focused on making our own AI systems better — bigger context windows, better retrieval, smarter agents. But collaboration is a two-player game. It doesn't matter how sharp your context infrastructure is if the other side of the conversation has none.
Your AI setup is only as effective as the weakest context system in the chain.
That's not a reason to judge anyone. It's a reason to bridge the gap — from both sides.
The people building real context systems are gaining an advantage. But the ones who'll actually benefit from it are the ones who use that advantage to make collaboration easier for everyone, not just faster for themselves.
I'm still figuring this out. But I'm pretty sure the answer isn't "everyone else needs to catch up." It's "how do I build systems that work even when they can't?"
Sent with love from London
Tell Your Friends