Introducing Signal-Driven Development
Not "done" in the project management sense — not "the sprint ended" or "the stakeholders signed off." Done in the engineering sense. Structurally complete. Semantically consistent. Ready for implementation with confidence that the architecture will hold.
Evans never answered this question. Neither did Vernon, Brandolini, or anyone else in the DDD community. And it's not because the question is unimportant — it's because the traditional answer was always implicit. You knew the model was done when the team stopped finding new insights during knowledge crunching. When the conversations with domain experts stopped producing surprises. When the event storming board stabilized.
That's not a definition of done. That's a feeling.
I spent the better part of a year doing rigorous domain modeling across multiple products — complex domains with regulatory concerns, event-sourced architectures, cross-context dependencies, and temporal behaviors that don't fit neatly into Evans's spatial model. I did this work as a solo practitioner with an AI collaborator, using the approach I described in the first three posts of this series.
What emerged wasn't just a set of domain models. It was a methodology. One with a measurable definition of done, a repeatable convergence process, and a feedback loop that makes domain modeling adversarial in the best sense of the word.
I'm calling it Signal-Driven Development.
The Core Problem: DDD Has No Feedback Loop
Domain-Driven Design gives you an extraordinary vocabulary for modeling complex systems. Bounded contexts. Aggregates. Domain events. Policies. Sagas. The building blocks are precise, expressive, and battle-tested across two decades of practice.
What DDD doesn't give you is a way to know when you've used them correctly.
Consider the typical DDD workflow. You do knowledge crunching — workshops, event storming sessions, whiteboard conversations with domain experts. You iterate on the model. You refine bounded context boundaries. You identify aggregates and their invariants. At some point, someone says "I think we're good" and the team moves to implementation.
But "I think we're good" is a subjective assessment. There's no structural verification. No way to measure whether the model is complete, whether the boundaries are consistent, whether the heuristics that Evans and Vernon established are actually being honored. The model might have an aggregate with twelve commands and no invariants — a consistency boundary that enforces nothing. It might have two bounded contexts sharing fifteen terms with identical definitions — contexts that aren't actually bounded. It might have a policy that emits six commands in response to a single event — a stateless reaction doing the work of a saga.
These aren't obscure edge cases. They're the gaps that every experienced DDD practitioner has learned to spot through years of pattern recognition and hard-won intuition. The gaps that junior architects miss entirely. The gaps that AI-generated models introduce systematically, because an LLM has no intuition — only statistical pattern completion.
DDD needs a feedback loop. Not a checklist. A diagnostic system that examines a domain specification structurally, measures it against the heuristics that the DDD community has established over twenty years, and produces a report that tells you exactly what's incomplete, what's inconsistent, and what violates the principles you claim to follow.
That feedback loop is what I'm calling a gap report. And the methodology built around it is Signal-Driven Development.
Three-Pass Convergence
SDD's core mechanic is structured convergence through iterative gap resolution. The process works like this:
Pass 1 produces a domain specification — the full structural model of your domain expressed in DDD building blocks. Bounded contexts, aggregates, domain events, commands, policies, sagas, projections, invariants, value objects, domain services. Everything named, everything placed, every relationship explicit. The gap report for Pass 1 identifies what the specification cannot answer: missing invariants, boundary violations, heuristic threshold breaches, methodology process gaps. In a complex domain, Pass 1 typically surfaces 20 to 35 gaps.
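To make that concrete, here is a minimal sketch of what a domain specification might look like as data. The shape is illustrative only — the real specification carries far more detail, and none of these class or field names are prescribed by SDD.

```python
from dataclasses import dataclass, field

# Illustrative data model for a domain specification. Names and fields are
# hypothetical; they exist only to give the later sketches something to check.

@dataclass
class Aggregate:
    name: str
    commands: list[str] = field(default_factory=list)
    events: list[str] = field(default_factory=list)      # domain events it emits
    invariants: list[str] = field(default_factory=list)  # consistency rules it enforces

@dataclass
class Policy:
    name: str
    reacts_to: str                                   # the domain event that triggers it
    emits: list[str] = field(default_factory=list)   # commands it issues in response

@dataclass
class BoundedContext:
    name: str
    terms: dict[str, str] = field(default_factory=dict)  # ubiquitous language: term -> definition
    aggregates: list[Aggregate] = field(default_factory=list)
    policies: list[Policy] = field(default_factory=list)

@dataclass
class DomainSpecification:
    contexts: list[BoundedContext] = field(default_factory=list)
```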
Pass 2 resolves those gaps. This is where the real architecture happens. You're not generating a model — you're interrogating one. Every gap is a question the specification couldn't answer. Some gaps resolve by adding missing elements (an aggregate without invariants needs invariants, or it needs to be dissolved). Some resolve by restructuring (two contexts with heavy term overlap need a boundary reassessment). Some resolve by making an explicit architectural decision and documenting why. The gap count drops — typically to 5 to 10. If it doesn't drop, the specification is diverging rather than converging, and that divergence is itself a diagnostic signal.
Pass 3 drives to zero. The remaining gaps are usually the hardest — the architectural decisions that require genuine tradeoff reasoning. A saga with seven steps that might need decomposition. A bounded context whose name doesn't match its actual responsibility. An aggregate whose event fan-out suggests it's doing too much. These are the decisions that experienced architects agonize over in whiteboard sessions. SDD forces them into the open by making them measurable.
The definition of done is zero unresolved gaps. Not "zero gaps identified" — gaps will always be identified. Zero unresolved gaps. Every gap has been examined, and for each one, the architect has either changed the model to address it or documented why the current design is intentional. The resolution is the artifact, not the absence of the finding.
Some domains require four or five passes. The three-pass label describes the typical trajectory, not a hard constraint. The invariant is convergence: each pass must reduce the gap count. If it doesn't, something is structurally wrong with the specification, and that non-convergence is the most important signal the process can produce.
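The loop itself is simple enough to sketch. This is pseudocode in Python clothing: `run_gap_analysis` and `resolve_gaps` are stand-ins for the real analysis and for the architect's work between passes, and the gap shape is an assumption, not a prescribed format.

```python
# Sketch of the convergence loop. run_gap_analysis and resolve_gaps are
# hypothetical stand-ins; a "gap" here is any mapping with an optional
# "resolution" entry recording the architect's decision.

def converge(spec, run_gap_analysis, resolve_gaps, max_passes=5):
    previous = None
    for pass_number in range(1, max_passes + 1):
        gaps = run_gap_analysis(spec)
        unresolved = [g for g in gaps if not g.get("resolution")]
        print(f"Pass {pass_number}: {len(unresolved)} unresolved gaps")

        # Definition of done: zero unresolved gaps. Every finding either changed
        # the model or carries a documented rationale for the current design.
        if not unresolved:
            return spec

        # Convergence invariant: each pass must reduce the unresolved count.
        # A plateau or increase is itself the most important diagnostic signal.
        if previous is not None and len(unresolved) >= previous:
            raise RuntimeError("specification is diverging, not converging")

        previous = len(unresolved)
        spec = resolve_gaps(spec, unresolved)  # resolve or document each gap

    raise RuntimeError(f"no convergence within {max_passes} passes")
```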
Gap Reports as Diagnostic Signals
The gap report is the heart of SDD. It's not a test suite. It's not a linter output. It's a structured diagnostic that evaluates a domain specification against three categories of concern.
Structural completeness asks whether the specification has the elements it needs. Does every aggregate have at least one invariant? Does every bounded context have a clear linguistic boundary? Are there commands without corresponding domain events? Are there domain events that no policy or projection reacts to? These aren't style preferences — they're the structural expectations that Evans and Vernon established. An aggregate without invariants isn't a design choice; it's a consistency boundary that enforces nothing.
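Several of these checks are mechanical enough to express directly against the data-model sketch above. The checks below are illustrative, not an exhaustive rule set, and they only look at policies; a real implementation would also consider projections and cross-context subscribers.

```python
# Illustrative structural-completeness checks against the data-model sketch above.

def structural_gaps(spec):
    gaps = []
    for context in spec.contexts:
        emitted = {e for agg in context.aggregates for e in agg.events}
        reacted = {p.reacts_to for p in context.policies}

        for agg in context.aggregates:
            # An aggregate with no invariants is a consistency boundary enforcing nothing.
            if not agg.invariants:
                gaps.append(f"{context.name}/{agg.name}: aggregate declares no invariants")
            # Commands that never produce a domain event deserve a closer look.
            if agg.commands and not agg.events:
                gaps.append(f"{context.name}/{agg.name}: commands produce no domain events")

        # Domain events nothing reacts to may indicate a missing policy or projection.
        for event in sorted(emitted - reacted):
            gaps.append(f"{context.name}: nothing reacts to domain event '{event}'")
    return gaps
```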
Heuristic thresholds measure whether the specification honors the quantitative guidelines the DDD community has developed through practice. Vernon's small aggregate heuristic suggests no more than six commands per aggregate. Context term overlap beyond three shared definitions suggests insufficient separation. Saga step counts beyond five suggest decomposition is needed. These thresholds aren't arbitrary — they're grounded in two decades of published work by Evans, Vernon, Brandolini, and others. They're configurable, because every domain has legitimate reasons to deviate. But deviations should be conscious decisions, not invisible accidents.
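A sketch of how those thresholds might be carried as configuration. The numeric values mirror the heuristics just described; the structure, and the idea of passing documented overrides alongside them, is illustrative rather than a fixed format.

```python
from itertools import combinations

# Illustrative threshold configuration. Values mirror the heuristics described
# above; `overrides` is a set of finding strings the architect has consciously
# accepted and documented elsewhere.

THRESHOLDS = {
    "max_commands_per_aggregate": 6,        # Vernon's small-aggregate heuristic
    "max_shared_terms_between_contexts": 3,
}

def heuristic_gaps(spec, thresholds=THRESHOLDS, overrides=frozenset()):
    findings = []
    for context in spec.contexts:
        for agg in context.aggregates:
            if len(agg.commands) > thresholds["max_commands_per_aggregate"]:
                findings.append(
                    f"{context.name}/{agg.name}: {len(agg.commands)} commands exceed threshold"
                )
    for a, b in combinations(spec.contexts, 2):
        # Shared terms with identical definitions suggest the boundary isn't doing its job.
        shared = {t for t in set(a.terms) & set(b.terms) if a.terms[t] == b.terms[t]}
        if len(shared) > thresholds["max_shared_terms_between_contexts"]:
            findings.append(f"{a.name}/{b.name}: {len(shared)} identically defined shared terms")
    # A documented override turns a violation into a conscious decision, not a gap.
    return [f for f in findings if f not in overrides]
```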
Methodology process gaps verify the discipline of the modeling process itself. Does every architectural decision have a documented rationale with at least one rejected alternative? Have gap resolutions been traced to specific changes in the specification? Is the gap count decreasing across passes? These are the meta-rules — the rules about the process, not the model.
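These meta-rules are checkable too, given a decision log and the gap counts from each pass. The decision-log shape below is hypothetical; the point is only that process discipline can be verified rather than asserted.

```python
# Illustrative process checks. Each decision entry is assumed to carry a title,
# a rationale, and at least one rejected alternative.

def process_gaps(decisions, gap_counts_per_pass):
    gaps = []
    for decision in decisions:
        if not decision.get("rationale"):
            gaps.append(f"decision '{decision['title']}' has no documented rationale")
        if not decision.get("rejected_alternatives"):
            gaps.append(f"decision '{decision['title']}' records no rejected alternative")
    # The gap count must fall on every pass; a plateau is itself a finding.
    for earlier, later in zip(gap_counts_per_pass, gap_counts_per_pass[1:]):
        if later >= earlier:
            gaps.append(f"gap count did not decrease across passes ({earlier} -> {later})")
    return gaps
```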
The critical insight is that gap reports are signals, not verdicts. A gap doesn't mean the model is wrong. It means the model has a question it hasn't answered. The architect reads the signal, investigates, and either changes the model or documents why the current design is correct. Both outcomes are valid. The gap report's job is to surface the question. The architect's job is to answer it.
This is what makes SDD adversarial in a productive way. The gap report is the colleague who keeps asking "but why?" — not to obstruct, but to force the kind of rigorous reasoning that produces architectures you can defend under scrutiny.
The Architecture Palette
The third artifact in SDD — alongside the domain specification and the gap report — is the architecture palette. It's a visual projection of the domain specification expressed in DDD building blocks, organized by bounded context, showing the relationships between aggregates, events, commands, policies, and sagas.
The palette serves two purposes. First, it's a communication artifact. A domain specification can be hundreds of elements across dozens of pages. The palette compresses that into a visual map that an architect can hold in working memory. Second, it's a verification surface. Structural patterns that are invisible in a textual specification become obvious in a visual layout — an aggregate that's connected to everything, a bounded context with no outbound events, a saga that spans three contexts when it should span two.
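One way to produce that visual layout is to project the specification into Graphviz, one cluster per bounded context. This is a minimal sketch against the data model from earlier; the styling and layout are placeholders, not the palette format itself.

```python
# Minimal sketch: project a specification into Graphviz DOT, one cluster per
# bounded context, so connection patterns become visible at a glance.

def to_dot(spec):
    lines = ["digraph palette {", "  rankdir=LR;"]
    for i, context in enumerate(spec.contexts):
        lines.append(f"  subgraph cluster_{i} {{")
        lines.append(f'    label="{context.name}";')
        for agg in context.aggregates:
            lines.append(f'    "{agg.name}" [shape=box];')
            for event in agg.events:
                lines.append(f'    "{agg.name}" -> "{event}";')  # aggregate emits event
        for policy in context.policies:
            lines.append(f'    "{policy.reacts_to}" -> "{policy.name}" [style=dashed];')
            for command in policy.emits:
                lines.append(f'    "{policy.name}" -> "{command}" [style=dashed];')
        lines.append("  }")
    lines.append("}")
    return "\n".join(lines)

# Render with Graphviz, e.g.: dot -Tsvg palette.dot -o palette.svg
```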
The palette is the thing you put on the wall. The specification is the thing you trust. The gap report is the thing that tells you whether the specification deserves that trust.
What This Looks Like in Practice
I've run this process across eight product domains over the past year. Not toy examples — production systems with regulatory requirements, event-sourced architectures, and cross-product dependencies. Here's what the convergence trajectory actually looks like.
A typical Pass 1 gap report surfaces 20 to 30 findings. Five to eight are structural errors — missing invariants, aggregates without consistency boundaries, commands that don't produce events. Ten to fifteen are heuristic threshold violations — aggregates with too many commands, contexts with overlapping vocabulary, policies doing the work of sagas. The rest are methodology gaps — architectural decisions made without documented alternatives, gap resolutions without traceability.
Pass 2 works through the bulk of them. The structural errors are usually straightforward — add the missing invariant, dissolve the aggregate that has no reason to exist, split the policy into a saga. The heuristic violations require judgment — sometimes the threshold is right and the model needs to change, sometimes the domain genuinely requires a larger aggregate and the override needs to be documented. The methodology gaps are discipline — go back and document the reasoning.
Pass 3 finds the residuals. In a complex domain, there are usually two to five gaps that survived Pass 2 — often the ones that require genuine architectural tradeoffs. A bounded context boundary that could reasonably be drawn in two places. A saga decomposition that improves one metric at the cost of another. These are the decisions that define the architecture, and SDD's contribution is forcing them into explicit, documented, measurable resolution rather than leaving them as implicit assumptions buried in the code.
By the end of Pass 3, the gap count is zero. Every structural element has been verified. Every heuristic threshold has been honored or consciously overridden. Every architectural decision has been documented with alternatives considered and rationale recorded.
That's a definition of done.
The Part We Didn't Expect
I want to be transparent about the intellectual path that led here.
SDD emerged from practice. I didn't start with a methodology and apply it. I started with a problem — how do you do rigorous domain modeling without a room full of people? — and iterated on the process until something repeatable crystallized. The three-pass convergence, the gap report categories, the architecture palette, the definition of done — all of it came from doing the work and noticing what worked.
I didn't research what the DDD community's leading voices were currently publishing until months after the methodology had stabilized. When I finally did — when I read Evans's Explore DDD 2024 keynote, when I looked at what Domain Language is now focused on, when I read Khononov's work on coupling as a measurable heuristic — the convergence was startling.
Evans is now focused on integrating AI into domain-rich systems while preserving design integrity. His keynote framing — that an LLM trained on a ubiquitous language is effectively a bounded context — is the same conclusion I reached independently while building the constraints that keep AI output within a closed DDD vocabulary. Same destination, completely different paths. He's since published a follow-up on context mapping with AI-based components — drawing an Anti-Corruption Layer between deterministic application code and probabilistic LLM output. We built the same pattern independently.
Khononov's Balancing Coupling in Software Design (Addison-Wesley, 2024) formalizes coupling as a measurable design heuristic with an optimizable function — the same pattern as SDD's threshold model. Take the qualitative principles Evans established, make them quantitative, set configurable thresholds, measure against them. He arrived at it through academic rigor. I arrived at it through building a system that needed to verify domain models automatically.
DDD Europe 2026 has workshops on accelerating strategic design with large language models — Thomas Coopman's two-day session in Antwerp this June. The community is mainstreaming the intersection of AI and DDD as a topic. We've been living in that intersection for months.
I'm not claiming priority. I'm observing convergence. When independent practitioners arrive at the same conclusions from different starting points, it's not a coincidence — it's the problem asserting its own shape. The DDD community is converging on the need for measurable heuristics, AI-mediated modeling, and structural verification because those are the problems that surface when you take DDD seriously at scale. Whether you start from Evans's theory or from a solo practitioner's frustration, the same walls appear.
The domain specification that emerged from three-pass convergence was structurally complete enough to verify context provenance — without a single line of implementation code. The design was the proof.
That's not a theoretical claim. That's a measured result from applying this methodology to a real product domain. The specification produced by SDD's convergence process contained enough structural information that compliance verification could be performed against the domain model directly — before any runtime existed to test against. The architecture didn't need to be built to be verified. It needed to be modeled rigorously enough that verification was a projection of the model itself.
What SDD Is Not
SDD is not a replacement for DDD. It's an extension. Evans gave us the building blocks — the vocabulary for decomposing complex domains into bounded contexts, aggregates, and domain events. Vernon gave us the implementation patterns — the tactical guidance for turning those building blocks into working code. Brandolini gave us the discovery vocabulary — Event Storming as a method for collaborative knowledge crunching. Narrative-Driven Development gave us the temporal dimension — the recognition that domains exist in time, not just in space.
SDD gives DDD a feedback loop and a definition of done.
The gap report doesn't replace knowledge crunching. It makes knowledge crunching measurable. The three-pass convergence doesn't replace architectural intuition. It forces intuition into the open where it can be examined, challenged, and documented. The architecture palette doesn't replace event storming boards. It persists them.
If you practice DDD, you can practice SDD tomorrow. The process is the same — model the domain, interrogate the model, refine the model. SDD adds structure to the interrogation and a measurable endpoint to the refinement.
What Comes Next
The templates are coming — gap report templates, architecture palette formats, domain specification structures. Everything you need to run a three-pass convergence on your own domain. I'll share them through a public repository designed for practitioners who want to try SDD on a real project, not a tutorial exercise.
Post 5 will go deep on the gap report itself — what the categories look like, how the three-pass trajectory works in detail, and how to read the signals that a gap report produces. That's where the methodology becomes concrete enough to apply.
But the gap report is just the beginning. SDD surfaced patterns I didn't anticipate — patterns about what happens when AI enters the domain modeling process, about what structural verification reveals when it catches problems that humans can't see, about what a rigorous feedback loop does to the quality of architectural decisions over time.
Those patterns are the subject of the rest of this series.
SDD doesn't replace DDD. It gives DDD a definition of done.
This is Post 4 of a series on DDD, AI, and the methodology that emerged from practicing both rigorously. Posts 1–3 established the solo-builder problem, the AI collaboration model, and the vocabulary gap. The series continues with a deep dive into the gap report in Post 5.