The Gap Report: DDD's Missing Feedback Loop
In Post 4, I introduced Signal-Driven Development and its core claim: DDD has never had a definition of done. SDD provides one — zero unresolved gaps at the end of a structured convergence process.
But I left the gap report itself as a concept. This post makes it concrete. What does a gap report actually look like? What does it measure? How does the three-pass convergence trajectory work when you're sitting in front of a real domain specification?
I'm also releasing the SDD repository — templates for gap reports, resolution logs, domain specifications, and architecture palettes, plus a complete worked example showing three-pass convergence on a fictional domain. Grab the templates and run a pass on your own system. That's not a suggestion — it's the fastest way to understand whether SDD solves a problem you have.
Anatomy of a Gap Report
A gap report evaluates a domain specification against four categories. Each gap is a question the specification hasn't answered.
Structural Gaps (SG) are missing or malformed elements. These are binary — the element exists or it doesn't. An aggregate without invariants. A command that doesn't produce a domain event. A bounded context with no declared relationships to other contexts.
Structural gaps are the easiest to identify and the most dangerous to ignore. An aggregate without invariants is a consistency boundary that enforces nothing — it's a data structure with a misleading name. Evans is explicit about this: the aggregate exists to protect invariants. If there are no invariants, there is no aggregate.
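To make the distinction tangible, here's a minimal sketch in TypeScript (illustrative names, not anything from the SDD templates) contrasting a bare data structure with an aggregate that actually protects an invariant:

```typescript
// Illustrative only. Not from the SDD templates.

type Money = number; // cents, for simplicity

// A data structure with a misleading name: any caller can put it in any state.
interface AccountRecord {
  id: string;
  balance: Money;
}

// An aggregate: the command refuses to violate the invariant it exists to protect.
class Account {
  constructor(private readonly id: string, private balance: Money) {}

  // Invariant: the balance must never go negative.
  withdraw(amount: Money): { type: "FundsWithdrawn"; accountId: string; amount: Money } {
    if (amount <= 0) throw new Error("Withdrawal must be positive");
    if (this.balance - amount < 0) throw new Error("Invariant: balance cannot go negative");
    this.balance -= amount;
    return { type: "FundsWithdrawn", accountId: this.id, amount };
  }
}
```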
Heuristic Gaps (HG) are patterns that violate established DDD principles. Unlike structural gaps, these aren't binary — they're threshold-based. Vernon's small aggregate heuristic suggests no more than six commands per aggregate. Evans's bounded context principles suggest no more than three shared terms across contexts before you question whether the boundary is real. Saga step counts beyond five suggest decomposition.
Every heuristic has a measurable default grounded in published DDD literature. Every default is overridable — because every domain has legitimate reasons to deviate. The gap report doesn't penalize deviation. It forces you to acknowledge and document it. The difference between an architect who exceeds a heuristic intentionally and one who exceeds it accidentally is the documentation of the decision.
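Here's one way to model thresholds with documented overrides. The defaults are the ones named above; the shape is a sketch, not the SDD template format:

```typescript
// Defaults are the published-literature values named above; the shape is illustrative.

interface HeuristicThreshold {
  id: string;
  description: string;
  defaultLimit: number;
  // A deviation is legal only when it's acknowledged and documented.
  override?: { limit: number; rationale: string };
}

const heuristics: HeuristicThreshold[] = [
  { id: "commands-per-aggregate", description: "Vernon's small aggregate heuristic", defaultLimit: 6 },
  { id: "saga-steps", description: "Saga step count before decomposition", defaultLimit: 5 },
  {
    id: "shared-terms",
    description: "Shared terms across bounded contexts",
    defaultLimit: 3,
    // Hypothetical override: the deviation is documented, not silent.
    override: { limit: 5, rationale: "Scheduling and Billing legitimately share patient vocabulary" },
  },
];

const effectiveLimit = (h: HeuristicThreshold): number => h.override?.limit ?? h.defaultLimit;
```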
Language Gaps (LG) are ambiguities in the ubiquitous language. The same term used with different meanings across contexts without explicit declaration. Unnamed concepts referenced in multiple places. Overloaded terms where a single word carries two distinct domain meanings.
Language gaps are subtle and consequential. When "treatment" means both "the full clinical encounter" and "a single medical intervention," every conversation about treatments becomes ambiguous. The code will resolve the ambiguity — but it will resolve it silently, and different developers will resolve it differently.
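One way to close a language gap is to declare the overloaded term explicitly in each context. A sketch, with illustrative type names:

```typescript
// Illustrative type names; the "treatment" overload is the one described above.

// In the Clinical context, "Treatment" is the full clinical encounter.
namespace Clinical {
  export interface Treatment {
    encounterId: string;
    interventionIds: string[]; // the individual procedures performed during the encounter
  }
}

// In the Billing context, "Treatment" is a single billable intervention.
namespace Billing {
  export interface Treatment {
    treatmentCode: string;
    encounterRef: string; // back-reference to the clinical encounter
  }
}
```

The code still contains two types named Treatment, but the divergence is now declared rather than discovered.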
Decision Gaps (DG) are architectural choices that haven't been made or haven't been documented. A bounded context boundary that could reasonably be drawn in two places. A relationship type that's assumed but not declared. A scope decision that's implicit rather than explicit.
Decision gaps are the gaps that define your architecture. The structural gaps and heuristic violations are usually mechanical to fix. The decision gaps require judgment, tradeoff reasoning, and the willingness to commit to a position and document why.
What a Gap Looks Like
Here's a structural gap from the worked example — a fictional veterinary clinic domain specification:
SG-01: Appointment aggregate has zero invariants
Severity: Error
Rule: Aggregates must protect at least one invariant — an aggregate without invariants has no consistency boundary to enforce.
Specification element: Appointment aggregate in Scheduling context
Analysis: Appointment defines four commands and four events but no invariants. What prevents double-booking the same time slot? What prevents checking in a cancelled appointment? What prevents rescheduling to a time in the past? Without invariants, the Appointment aggregate is a data container, not a consistency boundary.
Recommendation: Define invariants. At minimum: (1) No two appointments for the same veterinarian may overlap in time. (2) Appointment status must follow a valid lifecycle. (3) Rescheduled time must be in the future.
Notice the structure. The gap states what was measured (zero invariants), why it matters (no consistency boundary), what the consequences are (double-booking, invalid transitions), and what to do about it (define specific invariants). It's not a vague warning. It's an actionable diagnostic that tells you exactly where to look and what question to answer.
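That structure is regular enough to capture as a record. The field names below mirror the report entry; the type itself is illustrative, not the repository's template format:

```typescript
// Field names mirror the report entry above; the type itself is illustrative.

type GapCategory = "SG" | "HG" | "LG" | "DG";
type Severity = "Error" | "Warning";

interface Gap {
  id: string;             // e.g. "SG-01"
  category: GapCategory;
  severity: Severity;
  rule: string;           // what was measured, against which principle
  element: string;        // where in the specification to look
  analysis: string;       // why it matters and what the consequences are
  recommendation: string; // what question to answer, and how
}

const sg01: Gap = {
  id: "SG-01",
  category: "SG",
  severity: "Error",
  rule: "Aggregates must protect at least one invariant",
  element: "Appointment aggregate in Scheduling context",
  analysis: "Four commands, four events, zero invariants: a data container, not a consistency boundary.",
  recommendation: "Define invariants: no overlapping appointments per veterinarian, a valid status lifecycle, rescheduling only into the future.",
};
```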
Here's a heuristic gap from the same domain:
HG-04: Zero sagas in a domain with multi-step processes
Severity: Warning
Rule: Domains with cross-aggregate, multi-step processes typically require at least one saga.
Metric: 0 sagas, 3 policies
Analysis: The full visit lifecycle spans multiple aggregates across multiple contexts: Appointment → Visit → Treatment → Invoice. This is a multi-step process with potential failure points. What if treatment is started but the visit is never closed? What if the visit is closed but invoice generation fails? Policies handle the happy path. There's no compensation or failure handling.
Recommendation: Evaluate the visit lifecycle as a saga candidate with compensation for each step.
And here's a decision gap:
DG-02: How does pricing work?
Severity: Error
Analysis: PricingService "calculates line item prices based on treatment codes and clinic pricing rules." But there's no pricing model, no price list aggregate, no pricing configuration. Where do prices come from? Are they per-treatment-code? Per-veterinarian? Time-based? The service exists but its data model is undefined.
Recommendation: Define a PriceList aggregate with pricing rules. Determine whether pricing is static or dynamic.
That decision gap is an error, not a warning, because it describes a domain concept that other parts of the specification depend on but that doesn't exist. Invoice generation references pricing. Pricing references nothing. There's a hole in the model where a concept should be.
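For illustration, here's one shape a resolution of DG-02 could take, assuming the simplest answer the gap leaves open: static, per-treatment-code pricing. Apart from PriceList, every name here is hypothetical:

```typescript
// Hypothetical resolution of DG-02, assuming static per-treatment-code pricing.
// "PriceList" comes from the recommendation; every other name is made up.

interface PricingRule {
  treatmentCode: string;
  priceCents: number;
}

class PriceList {
  private rules = new Map<string, PricingRule>();

  // Invariant: at most one pricing rule per treatment code.
  addRule(rule: PricingRule): void {
    if (this.rules.has(rule.treatmentCode)) {
      throw new Error(`Pricing rule for ${rule.treatmentCode} already defined`);
    }
    this.rules.set(rule.treatmentCode, rule);
  }

  priceFor(treatmentCode: string): number {
    const rule = this.rules.get(treatmentCode);
    if (!rule) throw new Error(`No pricing rule for ${treatmentCode}`);
    return rule.priceCents;
  }
}
```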
The Three-Pass Trajectory
The gap report becomes powerful across passes. Here's what convergence actually looks like, from the veterinary clinic worked example:
Pass 1: 18 gaps identified. 5 errors, 13 warnings. The domain specification has all six aggregates named and placed. It has events flowing between contexts. It looks like a domain model. But three of six aggregates have zero invariants. The Treatment aggregate is a sequential pipeline pretending to be an aggregate. There's no saga handling a multi-step visit lifecycle. Billing has no pricing model. The veterinarian schedule doesn't exist as a domain concept.
Every gap gets resolved. The resolution log documents what was decided and why. 16 accepted as recommended, 2 accepted with modification, 0 rejected. The specification grows: invariants go from 5 to 12, a saga is introduced, two new aggregates are added, a language overload is fixed.
Pass 2: 5 gaps. Zero errors, 5 warnings. The structural problems are gone. What remains are refinement concerns — the saga needs a timeout, the walk-in path has an event ordering dependency, a pricing snapshot rule needs to be explicit. These are the decisions that define the architecture's resilience, not its structure.
All five resolved. The invariant count climbs from 12 to 18. Every aggregate now has at least one invariant. Every cross-context relationship is declared with a type. Every scope decision is documented.
Pass 3: Zero gaps. Zero errors. Zero warnings. Converged.
The trajectory — 18 → 5 → 0 — is the signal. It tells you the methodology is working. Each pass reduces the gap count because the previous pass's resolutions addressed the root causes, not just the symptoms. When you fix the aggregate that has no invariants, you also fix the downstream gaps that depended on that aggregate having a consistency boundary. Foundational decisions resolve first; dependent decisions cascade.
If Pass 2 had produced more gaps than Pass 1, that would be the most important signal: the specification is diverging, not converging. Something is structurally wrong — probably a foundational boundary decision that's incorrect, causing every resolution to introduce new inconsistencies. Non-convergence means stop, revisit the boundaries, and restart the pass.
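The convergence signal itself is mechanical enough to express in a few lines. A sketch, using the worked example's numbers:

```typescript
// The 18 → 5 → 0 trajectory is from the worked example; the check is generic.

function convergenceSignal(gapCountsPerPass: number[]): "converged" | "converging" | "diverging" {
  if (gapCountsPerPass[gapCountsPerPass.length - 1] === 0) return "converged";
  for (let i = 1; i < gapCountsPerPass.length; i++) {
    // A pass producing more gaps than the last one is the stop signal.
    if (gapCountsPerPass[i] > gapCountsPerPass[i - 1]) return "diverging";
  }
  return "converging";
}

console.log(convergenceSignal([18, 5, 0])); // "converged"
console.log(convergenceSignal([18, 23]));   // "diverging": stop and revisit the boundaries
```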
The Resolution Log
The gap report identifies questions. The resolution log records answers. Every resolution documents three things:
The decision: What was chosen and what was rejected. An aggregate without invariants can be fixed by adding invariants (if it's a real consistency boundary) or dissolved (if it isn't). The resolution records which path was taken.
The rationale: Why this decision was made. This is the most valuable artifact SDD produces. Six months from now, when someone asks "why is this a saga instead of a policy chain?" the resolution log has the answer — with the gap that prompted the question, the alternatives considered, and the reasoning that led to the current design.
The structural impact: What changed in the specification. "+1 saga, +1 event, +1 invariant." This makes the change traceable and auditable. Every element in the final specification traces back to either the initial extraction or a specific gap resolution.
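Put together, a resolution log entry is a small record with those three fields. The shape below is illustrative; the content paraphrases the HG-04 resolution from the worked example:

```typescript
// The shape is illustrative; the content paraphrases the HG-04 resolution.

interface Resolution {
  gapId: string;
  decision: string;         // what was chosen, and what was rejected
  rationale: string;        // why: the part future readers will actually need
  structuralImpact: string; // what changed in the specification
}

const hg04: Resolution = {
  gapId: "HG-04",
  decision: "Model the visit lifecycle as a saga; rejected extending the existing policy chain.",
  rationale: "Policies only cover the happy path; the lifecycle needs compensation when a step fails, e.g. invoice generation after visit close.",
  structuralImpact: "+1 saga, +1 event, +1 invariant",
};
```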
The resolution log is the architecture decision record that DDD always needed but never formalized at the domain modeling level. ADRs capture decisions about technology choices and system-level architecture. Resolution logs capture decisions about domain model structure — why this aggregate exists, why this boundary is drawn here, why this invariant matters.
Running Your Own Pass
You don't need tooling to try this. You need a domain specification and the gap report template.
Step 1: Pick a bounded context in your system. Write the domain specification — name every aggregate, every command, every event, every invariant, every policy, every saga. Make every relationship explicit. If you can't name it, it's a gap.
Step 2: Run the gap report against it. For each aggregate, ask: does it have invariants? For each command, ask: does it produce an event? For each bounded context, ask: are its relationships declared? Check the heuristic thresholds — command density, term overlap, saga step count. Look for language overloads and undocumented decisions.
Step 3: Write the resolution log. For every gap, decide: change the model or document why the current design is intentional. Record the rationale.
Step 4: Update the specification with the resolutions and run the gap report again. The gap count should drop. If it does, you're converging. If it doesn't, revisit your boundary decisions.
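The mechanical checks in Step 2 are simple enough to sketch in code. This isn't SDD tooling, just an illustration of how little machinery the structural and heuristic checks require; the specification shape here is hypothetical:

```typescript
// Not SDD tooling: just how little machinery the mechanical checks need.
// The specification shape and names here are hypothetical.

interface Command { name: string; producesEvent: boolean }
interface Aggregate { name: string; commands: Command[]; invariants: string[] }
interface BoundedContext { name: string; relationships: string[]; aggregates: Aggregate[] }

function runPass(contexts: BoundedContext[]): string[] {
  const gaps: string[] = [];
  for (const ctx of contexts) {
    // Every bounded context must declare its relationships to other contexts.
    if (ctx.relationships.length === 0) {
      gaps.push(`SG: context "${ctx.name}" declares no relationships`);
    }
    for (const agg of ctx.aggregates) {
      // Every aggregate must protect at least one invariant.
      if (agg.invariants.length === 0) {
        gaps.push(`SG: aggregate "${agg.name}" has zero invariants`);
      }
      // Heuristic default: no more than six commands per aggregate.
      if (agg.commands.length > 6) {
        gaps.push(`HG: aggregate "${agg.name}" has ${agg.commands.length} commands (default limit 6)`);
      }
      // Every command must produce a domain event.
      for (const cmd of agg.commands) {
        if (!cmd.producesEvent) {
          gaps.push(`SG: command "${cmd.name}" produces no domain event`);
        }
      }
    }
  }
  return gaps;
}
```

The decision gaps and language gaps stay manual: that's where the judgment lives.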
The SDD repository has everything you need — templates for all four artifacts and a complete worked example showing three-pass convergence. The veterinary clinic example walks through 18 gaps across three passes with full resolution rationale for every decision.
Why This Matters
The gap report solves a problem that every DDD practitioner has felt but few have named: the anxiety of not knowing whether the model is done.
You finish a domain modeling session. The event storming board is covered in stickies. The bounded contexts feel right. The aggregates have names. But there's a nagging uncertainty — did we miss something? Are the boundaries correct? Is that aggregate doing too much? Is that policy actually a saga?
Without a gap report, the only way to answer those questions is experience. The architects who've seen dozens of domain models can spot the patterns. The architects who haven't can't — and they won't know what they missed until implementation reveals it.
The gap report makes the experienced architect's intuition explicit, measurable, and transferable. It asks the questions that a senior DDD practitioner would ask. It flags the patterns that Evans, Vernon, and Brandolini documented. It forces the decisions that matter into the open where they can be examined.
Each gap is the question an experienced practitioner would ask. SDD asks it for you.
What Comes Next
Post 5 gives you the methodology to try. The repository gives you the tools.
But the gap report revealed something I didn't anticipate when I first built this process. When AI enters the domain modeling pipeline — when the specifications aren't authored by humans but generated by language models — a new category of failure emerges. Structurally valid models that are semantically wrong. Patterns that pass every gap report check but misrepresent the domain's actual behavior.
Post 6 introduces the first pattern that SDD surfaced about AI-mediated domain modeling: the Candidate Lifecycle.
This is Post 5 of a series on DDD, AI, and the methodology that emerged from practicing both rigorously. Post 4 introduced Signal-Driven Development. The series continues with AI-mediated domain modeling patterns starting in Post 6.