The Reactive Path Has No Vocabulary
I've been practicing Domain-Driven Design for over a decade. I've built event-sourced systems. I've implemented CQRS. I've drawn bounded context boundaries, defined aggregates, modeled domain events with intentional language. I'd have told you, with confidence, that I understood the methodology.
Then I ran a structured assessment on my own knowledge and found a gap I'd been building around for years.
Not a minor gap. Not an edge case or an advanced technique I hadn't gotten to yet. A fundamental gap in how I modeled half of every system I'd ever built.
I want to tell you what it was, because I think it reveals something about how DDD actually transfers knowledge — and why that transfer mechanism fails for more people than anyone admits.
Here's the setup. If you've worked with event-driven architecture, you know there are two sides to the system. There's the command path — where requests come in, business rules get enforced, and domain events get emitted. And there's the reactive path — where those events get consumed and the system responds.
My command path was clean. Domain-organized. Rigorous naming. Aggregates enforced invariants. Commands expressed intent in business language. Value Objects carried meaning. If you read the command side of my code, you could understand what the business did.
My reactive path was a different story.
Every function that consumed a domain event was an "event handler." That was the entire vocabulary. Hundreds of functions across multiple systems, all filed under one undifferentiated category. Some of them transformed data into read models. Some of them evaluated conditions and issued commands to other aggregates. Some of them coordinated multi-step workflows across time, tracking state across multiple events. All of them were "event handlers."
I wasn't modeling the reactive side of my systems. I was just coding it.
The gap became visible when I ran a structured assessment — a systematic walk through DDD's conceptual landscape, starting from philosophical foundations and working forward through tactical patterns, strategic design, and the post-Evans vocabulary that's emerged in the last twenty years.
I did this with an AI collaborator, specifically to pressure-test what I thought I knew against what I actually knew. Not to learn DDD from scratch — to find the holes that years of practice had papered over.
The hole appeared in the reactive path.
The DDD community — particularly through Alberto Brandolini's Event Storming work and the broader Narrative-Driven Development (NDD) movement — has developed precise vocabulary for the reactive side of a system. Three concepts, each with distinct responsibilities:
A Projection is pure data transformation. An event arrives, data moves into a read model. No business logic. No decisions. Just reshaping data for consumption.
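As a minimal sketch (the event and read-model shapes here are hypothetical, not taken from any particular framework), a projection is little more than a function from an event to a row in a read model:

```typescript
// Hypothetical domain event and read-model row, for illustration only.
interface OrderPlaced {
  type: "OrderPlaced";
  orderId: string;
  customerId: string;
  totalCents: number;
}

interface OrderSummaryRow {
  orderId: string;
  customerId: string;
  totalCents: number;
  status: "placed";
}

// Projection: reshape the event into a read-model row. No conditions,
// no commands, no business decisions -- just data transformation.
function projectOrderPlaced(event: OrderPlaced): OrderSummaryRow {
  return {
    orderId: event.orderId,
    customerId: event.customerId,
    totalCents: event.totalCents,
    status: "placed",
  };
}
```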
A Policy is a decision point. "When X happens, evaluate a condition, and if it holds, issue command Y." One event in, at most one command out. Stateless. The bridge between "something happened" and "decide what to do about it."
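Sketched with the same kind of hypothetical names, a policy is a stateless function from one event to zero or one command:

```typescript
// Hypothetical event and command types, for illustration only.
interface OrderPlaced {
  type: "OrderPlaced";
  orderId: string;
  totalCents: number;
}

interface FlagOrderForReview {
  type: "FlagOrderForReview";
  orderId: string;
  reason: string;
}

// Policy: "when an order is placed, if it exceeds a threshold,
// flag it for review." One event in, at most one command out, no state.
function reviewLargeOrdersPolicy(event: OrderPlaced): FlagOrderForReview | null {
  const THRESHOLD_CENTS = 500_000;
  if (event.totalCents <= THRESHOLD_CENTS) return null;
  return {
    type: "FlagOrderForReview",
    orderId: event.orderId,
    reason: `total ${event.totalCents} exceeds ${THRESHOLD_CENTS}`,
  };
}
```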
A Saga is coordination across time. "When X happens, do Y, then wait for Z, then if Z succeeds do W, but if Z fails compensate by doing V." Stateful. Tracks progress across multiple events and commands. No single aggregate owns the flow — the saga manages it as a long-running process.
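A saga is harder to compress, but the skeleton below (hypothetical event and command names again) shows the two defining features: state that survives between events, and an explicit compensation branch when a step fails:

```typescript
// Hypothetical events and commands for an order-fulfilment flow.
type SagaEvent =
  | { type: "OrderPlaced"; orderId: string }
  | { type: "PaymentSucceeded"; orderId: string }
  | { type: "PaymentFailed"; orderId: string }
  | { type: "ShipmentConfirmed"; orderId: string };

type SagaCommand =
  | { type: "TakePayment"; orderId: string }
  | { type: "ShipOrder"; orderId: string }
  | { type: "CancelOrder"; orderId: string }; // compensation

type SagaState = "awaitingPayment" | "awaitingShipment" | "completed" | "cancelled";

// Saga: stateful coordination across multiple events. Each event may
// advance the state and emit the next command; a failure takes the
// compensation path instead of the happy path.
class OrderFulfilmentSaga {
  private state: SagaState = "awaitingPayment";

  handle(event: SagaEvent): SagaCommand | null {
    switch (event.type) {
      case "OrderPlaced":
        return { type: "TakePayment", orderId: event.orderId };
      case "PaymentSucceeded":
        if (this.state !== "awaitingPayment") return null;
        this.state = "awaitingShipment";
        return { type: "ShipOrder", orderId: event.orderId };
      case "PaymentFailed":
        this.state = "cancelled";
        return { type: "CancelOrder", orderId: event.orderId }; // compensate
      case "ShipmentConfirmed":
        this.state = "completed";
        return null;
    }
  }
}
```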
These are not interchangeable. A projection has no business logic. A policy makes a single decision. A saga tracks state across steps. They have different responsibilities, different state requirements, different failure modes, and, critically, different testing strategies. Treating them as the same thing — "event handlers" — collapses three distinct architectural concerns into one bucket.
That's exactly what I'd been doing. Across every system I'd built.
And here's the part that stung: I'd read about all three. I'd encountered the terms. I could have given you a textbook definition of each one. But textbook definitions and operational understanding are completely different things. The concepts existed in my reading vocabulary but not in my modeling vocabulary. I knew the words. I didn't use them to think.
This is where Evans's deeper insight about language hits home — the one most practitioners skim past on their way to learning aggregate patterns. Evans embedded a claim in DDD that comes directly from linguistics: the Sapir-Whorf hypothesis. The idea that the language available to you shapes — and constrains — what you can think.
If you don't have a word for something, you will struggle to model it.
I didn't have operational words for the reactive path. So I didn't model it. I coded it. The functions worked. The events got processed. But the structural distinctions that would have made the architecture legible, testable, and evolvable were absent — not because the code was wrong, but because the vocabulary I was thinking in didn't make the distinctions visible.
The moment the vocabulary landed — the moment "event handler" split into "projection," "policy," and "saga" as distinct concepts in my working mental model — I could suddenly see the structure that had been invisible. This function is a projection: it has no business logic, it just reshapes data. That function is a policy: it evaluates a condition and issues a command. That other function is actually a saga: it's tracking state across multiple steps, and the fact that it doesn't have explicit compensation logic is a bug, not a feature.
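To make that last case concrete, here is a sketch (hypothetical names, not real code from my systems) of the kind of undifferentiated "event handler" I mean: it quietly tracks state across events, which makes it a saga in disguise, and the missing compensation branch is exactly the bug the vocabulary makes visible:

```typescript
// An "event handler" that is really an unnamed saga: it keeps state
// across events but has no compensation path when payment fails.
const pendingShipments = new Set<string>();

function onOrderEvent(event: { type: string; orderId: string }): void {
  if (event.type === "OrderPlaced") {
    pendingShipments.add(event.orderId); // state carried across events
  }
  if (event.type === "PaymentSucceeded" && pendingShipments.has(event.orderId)) {
    pendingShipments.delete(event.orderId);
    shipOrder(event.orderId);
  }
  // Nothing handles PaymentFailed: the order sits in pendingShipments
  // forever. Naming this a saga would have forced the question
  // "where is the compensation?"
}

// Hypothetical downstream call, stubbed for the sketch.
function shipOrder(orderId: string): void {
  console.log(`shipping ${orderId}`);
}
```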
The assessment didn't teach me something I'd never heard of. It surfaced something I'd heard of but never internalized — because the environment in which I practiced DDD never forced the internalization.
And that's the real finding. Not the gap itself, but why the gap existed.
DDD transmits its deepest knowledge through team osmosis. You sit in a room where a senior practitioner points at a reactive flow and says "that's a policy, not a saga — here's why the distinction matters." You absorb the vocabulary not through reading but through repeated exposure in collaborative contexts where the terms carry operational weight. The naming happens in the room, during modeling sessions, through productive disagreement about what things are.
If you've never been in that room — if you're a solo practitioner, or you work on a team where nobody has that vocabulary, or you crossed into DDD from a team that practiced it implicitly without naming the patterns — those concepts stay in the reading layer. You know the words. You can define them on a whiteboard. You just don't reach for them when you're actually modeling, because they never crossed from knowledge into instinct.
This isn't a gap that reading fixes. I'd read about policies and sagas. The books were on my shelf. The gap persisted because reading is not the same as being challenged to apply the vocabulary under pressure, in the context of a real system, where someone asks "is that a policy or a saga?" and you have to answer with consequences.
The structured assessment recreated that challenge. It forced me to walk through my own architecture and name things — not in the abstract, but against real systems I'd built. And when the naming forced a distinction I'd never made, the gap was visible.
One gap. Found in days. After being invisible for my entire career.
That tells me something about DDD's accessibility problem that goes beyond the solo-builder framing I've been writing about. The problem isn't just that solo builders lack teams. It's that the methodology's most important knowledge transfers through a channel — collaborative practice with experienced practitioners — that has no fallback. If the channel isn't available, the knowledge doesn't transfer. Not because the practitioner is lazy or the books are bad. Because some knowledge only crystallizes under pressure, and pressure requires a counterpart.
The assessment I ran proved that the counterpart doesn't have to be a human team in a physical room. But it does have to exist. And the fact that this gap persisted — a gap the community had named, published about, and built tooling around — suggests that a lot of practitioners are carrying similar invisible gaps right now.
Not because they haven't studied. Because the transfer mechanism DDD depends on doesn't reach them.
What I'm sitting with now is a follow-up question: if one structured assessment can surface a long-standing gap in a few days, what would a systematic methodology for this kind of diagnostic look like? Not a coaching plan. Not a reading list. A repeatable discipline that any architect can apply to their own understanding and their own systems.
I have some ideas. More on that next.
This is the third post in a series on rigorous domain modeling without a team. Previously: DDD Has a Solo-Builder Problem and Knowledge Crunching Doesn't Need a Room.