DDD has a solo-builder problem, and nobody talks about it.
Every serious architect I know agrees: if you're building software for a complex domain, Domain-Driven Design is the gold standard. Evans was right. Vernon was right. The methodology works.
But here's the thing nobody says out loud — it was designed for teams.
Knowledge crunching, the beating heart of DDD, is a collaborative act. Evans describes it as continuous conversation between developers and domain experts. Two or more humans in a room, wrestling with ambiguity until a shared model emerges. The developer learns the domain. The expert learns to see their own knowledge structured differently. The model is the artifact of that collision.
Event storming? You need a wall, sticky notes, and a room full of people who disagree productively. Context mapping? That's a negotiation between teams about how their models relate. Strategic design? It assumes you have multiple bounded contexts owned by multiple groups, and the hard part is drawing the boundaries between them.
Every foundational practice in DDD assumes you're not alone.
So what happens when you are?
There are thousands of architects building serious systems by themselves. Startup founders. Solo technical leads. Indie builders working on domains complex enough to warrant DDD — healthcare, finance, logistics, compliance — but without a cross-functional team to practice it with.
They know DDD is the right approach. They've read the blue book. Probably the red book too. They can recite the tactical patterns in their sleep: aggregates, entities, value objects, domain events, repositories.
But the strategic work — the part that actually matters — requires a kind of collaboration they don't have access to.
What most of them do instead is predictable and understandable. They adopt the tactical patterns without the strategic discipline. They define aggregates without doing bounded context analysis. They emit domain events without modeling policies or sagas. They build what looks like DDD from the outside but is really CRUD with fancier names.
This isn't a criticism. It's a structural problem with the methodology's delivery mechanism.
DDD transfers knowledge through team osmosis. The junior developer absorbs strategic design instincts by sitting in rooms where those conversations happen. The mid-level architect internalizes bounded context boundaries by watching senior practitioners negotiate them across teams. The vocabulary gets transmitted through practice, not just through books.
If you don't have the room, the team, or the senior practitioner next to you, whole layers of the methodology become invisible. Not wrong — invisible. You don't know what you're missing because the gaps are in the implicit knowledge that DDD assumes you'll acquire through collaboration.
I'll give you one example from my own experience. I've been practicing DDD for over a decade. I'd have called myself competent. Comfortable with aggregates, event sourcing, CQRS, the whole stack.
Then I ran a structured assessment of my own understanding and found a gap I didn't know existed: I had no vocabulary for the reactive path.
My command-side architecture was clean. Domain-organized. Rigorous language. But the event-handling side — the part where "something happened" turns into "decide what to do about it" — was an undifferentiated mass of event handlers. No structural distinction between a policy, a saga, and a projection. All lumped under "event processing."
I wasn't modeling half of my system. I was just coding it.
That gap didn't come from laziness or lack of reading. It came from never having sat in a room where someone pointed at a reactive flow and said "that's a policy, that's a saga, and here's why the distinction matters." The concepts existed in the literature. The community had named these patterns clearly. But without the team context where that vocabulary gets transmitted through practice, I'd been building around the gap for years without seeing it.
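To make that distinction concrete, here's a minimal sketch in Python. The domain and names (orders, shipping) are my own illustration, not from any particular codebase: a projection folds events into a read model and never issues commands; a policy is a stateless "whenever X happens, do Y" rule that turns one event into commands; a saga (process manager) holds state across multiple events and decides what to do next based on what has happened so far.

```python
from dataclasses import dataclass

# --- Events and commands (illustrative names) ---
@dataclass(frozen=True)
class OrderPlaced:
    order_id: str
    amount: float

@dataclass(frozen=True)
class PaymentReceived:
    order_id: str

@dataclass(frozen=True)
class ReserveStock:  # command
    order_id: str

@dataclass(frozen=True)
class ShipOrder:  # command
    order_id: str

# Projection: updates a read model from events; emits nothing.
class OrderSummaryProjection:
    def __init__(self):
        self.totals: dict[str, float] = {}

    def apply(self, event):
        if isinstance(event, OrderPlaced):
            self.totals[event.order_id] = event.amount

# Policy: stateless reaction — one event in, zero or more commands out.
class ReservationPolicy:
    def react(self, event):
        if isinstance(event, OrderPlaced):
            return [ReserveStock(event.order_id)]
        return []

# Saga: stateful coordination — waits for multiple events before acting.
class FulfillmentSaga:
    def __init__(self):
        self.placed: set[str] = set()
        self.paid: set[str] = set()

    def react(self, event):
        if isinstance(event, OrderPlaced):
            self.placed.add(event.order_id)
        elif isinstance(event, PaymentReceived):
            self.paid.add(event.order_id)
        # Ship only once the order is both placed and paid.
        ready = self.placed & self.paid
        commands = [ShipOrder(oid) for oid in sorted(ready)]
        self.placed -= ready
        self.paid -= ready
        return commands
```

All three consume the same event stream, but they are structurally different things: lumping them into one bag of "event handlers" is exactly the undifferentiated mass I'm describing above.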
That's one architect. One gap. Found after a decade.
Now multiply that across every solo practitioner who's doing their best with DDD but missing the collaborative forcing functions that the methodology was designed to run on.
The patterns are accessible. The books are available. The conference talks are on YouTube. None of that replaces the thing DDD actually depends on: two or more people challenging each other's understanding of a domain until the model gets sharper.
This raises a question I've been sitting with.
Is DDD's accessibility barrier the methodology itself? Or is it the assumption that practicing it requires a team?
Because if it's the latter — if the hard part isn't the concepts but the delivery mechanism — then the question isn't how to simplify DDD. It's how to give solo practitioners access to the collaborative pressure that makes it work.
That's not a tooling question. It's not about generating code or automating boilerplate. It's about whether you can recreate the conditions under which domain models get refined: sustained challenge, systematic feedback, and a counterpart that holds the full model in working memory while you interrogate specific corners of it.
I don't think the answer is "just find a team." For a lot of builders, the team isn't an option. The domain is still complex. The system still needs to be built right.
DDD has a solo-builder problem. And until we acknowledge it, thousands of architects will keep building halfway — tactical patterns without strategic discipline, domain events without the conversations that give them meaning.
The methodology works. The question is who gets to use it.
This is the first post in a series exploring what rigorous domain modeling looks like when you don't have a team in the room. More soon.