<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Coada — Writing</title><description>Notes on engineering, infrastructure, and the systems we build at Coada.</description><link>https://blog.coada.dev/</link><language>en-us</language><item><title>The Jobs Aren&apos;t Back. They&apos;re Different Jobs.</title><link>https://blog.coada.dev/the-jobs-aren-t-back-they-re-different-jobs/</link><guid isPermaLink="true">https://blog.coada.dev/the-jobs-aren-t-back-they-re-different-jobs/</guid><description>The headlines are warm again.
Software engineering roles have doubled since mid-2023. Sixty-seven thousand open positions. Listings up 11% year-over-year. Three openings for every qualified candidate.</description><pubDate>Thu, 09 Apr 2026 23:27:50 GMT</pubDate><content:encoded>&lt;p&gt;The headlines are warm again.&lt;/p&gt;
&lt;p&gt;Software engineering roles have doubled since mid-2023. Sixty-seven thousand open positions. Listings up 11% year-over-year. Three openings for every qualified candidate. Salary growth averaging 4.2%.&lt;/p&gt;
&lt;p&gt;If you&apos;re reading those numbers from a hiring desk, the crisis is over. If you&apos;re reading them from a couch where you&apos;ve been sitting for fourteen months sending applications into a void, the crisis hasn&apos;t even started being honest with you yet.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Counting Problem&lt;/h2&gt;
&lt;p&gt;The Bureau of Labor Statistics counts an opening. It doesn&apos;t ask whether the person who just lost their job at that same company can fill it.&lt;/p&gt;
&lt;p&gt;Indeed counts a listing. It doesn&apos;t track how many of those listings sit open for nine months because the company would rather leave a seat empty than hire someone who needs to be retrained.&lt;/p&gt;
&lt;p&gt;When a company says &quot;we have three openings for every qualified candidate,&quot; that&apos;s not a labor shortage. That&apos;s a skills mismatch being reported as a market recovery.&lt;/p&gt;
&lt;p&gt;Here&apos;s how the math actually works. A Fortune 500 company cuts 400 mid-level engineers in January. Posts 200 &quot;AI-native platform engineer&quot; roles in March. The net looks like growth. The headlines report recovery. And 400 people who built production systems for a decade are still unemployed, watching job boards full of openings they don&apos;t qualify for.&lt;/p&gt;
&lt;p&gt;That&apos;s not recovery. That&apos;s replacement.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;What Actually Changed in Three Years&lt;/h2&gt;
&lt;p&gt;The job that got eliminated didn&apos;t get reinstated. It got restructured into something the person who held it might not recognize.&lt;/p&gt;
&lt;p&gt;Three years ago, the market valued the engineer who could write React components and ship features on a two-week sprint cadence. That job is compressed now. AI handles the output layer. What&apos;s left is the judgment layer — system design, domain modeling, integration architecture, failure mode analysis, trade-off reasoning under ambiguity.&lt;/p&gt;
&lt;p&gt;The people who thrived writing code are not automatically the people who thrive deciding what code should exist.&lt;/p&gt;
&lt;p&gt;That&apos;s not a semantic distinction. It&apos;s a career-ending one for a lot of people who were very good at their jobs.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Cross-Disciplinary Divide&lt;/h2&gt;
&lt;p&gt;Watch who&apos;s actually landing in the new market. It&apos;s not the deepest specialists. It&apos;s the widest thinkers.&lt;/p&gt;
&lt;p&gt;The developer who also understood product. The backend engineer who also did infrastructure. The architect who&apos;d sat in sales calls and heard what the customer actually needed versus what the ticket said.&lt;/p&gt;
&lt;p&gt;They didn&apos;t just learn to code. They learned to think across boundaries. And now, paired with AI, those people are producing output that would have required a team of six three years ago.&lt;/p&gt;
&lt;p&gt;Meanwhile, the engineer who spent a decade getting exceptionally good at one framework, one layer of the stack, one narrow slice of the pipeline — that expertise was valuable when the floor existed.&lt;/p&gt;
&lt;p&gt;The floor is gone.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Retraining Lie&lt;/h2&gt;
&lt;p&gt;Every time a market shifts like this, the same reassurance shows up: &quot;People just need to reskill.&quot; It sounds reasonable. It&apos;s almost never true at the pace the market demands.&lt;/p&gt;
&lt;p&gt;Architecture isn&apos;t a certification. Lateral thinking isn&apos;t a bootcamp. The judgment that makes someone valuable in the current market was built over years of cross-disciplinary work — product decisions, infrastructure trade-offs, customer conversations, failed projects, recovered projects.&lt;/p&gt;
&lt;p&gt;You can&apos;t speedrun that.&lt;/p&gt;
&lt;p&gt;A six-month AI engineering course doesn&apos;t give you the instinct to know when an architecture will collapse under scale. Watching a tutorial on system design doesn&apos;t teach you how to make trade-offs between consistency and availability while a product manager is staring at you waiting for a yes. Those are skills you earn by being wrong enough times to recognize the shape of the next mistake before you make it.&lt;/p&gt;
&lt;p&gt;The people telling displaced engineers to &quot;just learn AI&quot; are offering the equivalent of telling a taxi driver to &quot;just become a software engineer&quot; in 2014. Technically possible. Statistically rare. And the people giving the advice are never the ones who have to take it.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Who the Numbers Are Actually Counting&lt;/h2&gt;
&lt;p&gt;Let&apos;s be precise about what &quot;67,000 open positions&quot; means.&lt;/p&gt;
&lt;p&gt;It means companies are building AI products and need engineers to build them. It means the demand is real. None of that is in dispute.&lt;/p&gt;
&lt;p&gt;What&apos;s in dispute is who those jobs are for.&lt;/p&gt;
&lt;p&gt;They&apos;re for engineers who already think in systems. Who already model domains. Who already design for failure. Who already understand distributed architecture, event-driven patterns, and the difference between structural correctness and semantic correctness.&lt;/p&gt;
&lt;p&gt;They&apos;re not for the mid-level developer who shipped features reliably for eight years and suddenly finds that &quot;reliable feature shipping&quot; is exactly the job AI compresses first.&lt;/p&gt;
&lt;p&gt;The cruelty of this market is that the people it displaced are often genuinely good engineers. They didn&apos;t fail. The definition of the job changed underneath them. And the new definition requires a fundamentally different kind of thinking that takes years to develop — years they may not have, because the market moved in months.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Uncomfortable Part&lt;/h2&gt;
&lt;p&gt;When someone says &quot;AI won&apos;t replace developers, it&apos;ll just change what developers do&quot; — that&apos;s a comforting lie dressed up as optimism.&lt;/p&gt;
&lt;p&gt;AI is already replacing developers. Entire categories of engineering output that required human labor eighteen months ago are now handled by tools that didn&apos;t exist then. The companies doing the replacing are the same ones posting those 67,000 job openings. They&apos;re not contradicting themselves. They&apos;re replacing one kind of engineer with a different kind.&lt;/p&gt;
&lt;p&gt;The honest framing isn&apos;t &quot;the jobs are back.&quot; It&apos;s: &quot;new jobs exist, and they require capabilities that a large portion of the displaced workforce doesn&apos;t have.&quot;&lt;/p&gt;
&lt;p&gt;That&apos;s not a recovery. That&apos;s a restructuring. And a restructuring without an honest accounting of who&apos;s left behind isn&apos;t optimism. It&apos;s negligence.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;What the Numbers Won&apos;t Tell You&lt;/h2&gt;
&lt;p&gt;They won&apos;t tell you about the senior engineer with fifteen years of experience who&apos;s been interviewing for eleven months and keeps getting rejected for roles that didn&apos;t exist when they were laid off.&lt;/p&gt;
&lt;p&gt;They won&apos;t tell you about the teams that post openings they never intend to fill — headcount budget theater for the board while the actual work gets absorbed by three people and an AI tool.&lt;/p&gt;
&lt;p&gt;They won&apos;t tell you about the companies lowballing experienced engineers with offers 50 to 70 percent below market rate, because a flooded market means someone desperate enough will take it.&lt;/p&gt;
&lt;p&gt;They won&apos;t tell you that &quot;three openings for every qualified candidate&quot; is an indictment, not a celebration. It means the market broke the pipeline. It spent three years laying off the mid-career engineers who would have been next in line, and now it&apos;s shocked that there&apos;s nobody to fill the senior roles.&lt;/p&gt;
&lt;p&gt;You don&apos;t get to gut the middle of the ladder and then complain about a talent shortage at the top.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Real Question&lt;/h2&gt;
&lt;p&gt;The real question isn&apos;t whether engineering jobs are &quot;back.&quot; It&apos;s whether the industry is willing to be honest about what it did.&lt;/p&gt;
&lt;p&gt;It overhired during a bubble. It cut aggressively to protect stock prices. It blamed AI for decisions that were really about margins. And now it&apos;s celebrating &quot;recovery&quot; while a generation of mid-career engineers — people who built the systems these companies run on — sit on the other side of a skills gap that didn&apos;t exist three years ago.&lt;/p&gt;
&lt;p&gt;The jobs aren&apos;t back. Different jobs showed up. And the people who lost the old ones are being told the market is fine while staring at a wall of postings they can&apos;t qualify for.&lt;/p&gt;
&lt;p&gt;That&apos;s not recovery. That&apos;s denial with a press release.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;em&gt;Stop counting openings. Start counting placements. The gap between those two numbers is the actual state of the engineering market — and nobody wants to talk about it.&lt;/em&gt;&lt;/p&gt;
</content:encoded></item><item><title>I Stopped Correcting 40% of My AI&apos;s Work.</title><link>https://blog.coada.dev/i-stopped-correcting-40-of-my-ai-s-work/</link><guid isPermaLink="true">https://blog.coada.dev/i-stopped-correcting-40-of-my-ai-s-work/</guid><description>Here&apos;s What Changed.
I run product and engineering for regulated healthcare software. Over the past year, I&apos;ve shipped three products using AI as a core member of the delivery team — not for code sugg</description><pubDate>Wed, 08 Apr 2026 07:40:08 GMT</pubDate><content:encoded>&lt;p&gt;Here&apos;s What Changed.&lt;/p&gt;
&lt;p&gt;I run product and engineering for regulated healthcare software. Over the past year, I&apos;ve shipped three products using AI as a core member of the delivery team — not for code suggestions or autocomplete, but for full execution: architecture, domain modeling, compliance documentation, sprint planning, pull requests.&lt;/p&gt;
&lt;p&gt;Early on, I was correcting roughly 40% of what the AI delivered. Not hallucinations or obvious errors — structural misalignment. The code was clean, the documents were well-written, and the logic was sound. But 4 out of 10 deliverables didn&apos;t match the intent. A story would be implemented against the wrong architectural assumption. A document would reference a pattern we&apos;d discussed but never decided on. A pull request would make a reasonable judgment call that happened to be the wrong one.&lt;/p&gt;
&lt;p&gt;That 40% correction rate was consistent across projects, across domains, across complexity levels. It wasn&apos;t a capability problem. It was a structural one.&lt;/p&gt;
&lt;p&gt;Today, across three shipped products, that number is roughly 5%.&lt;/p&gt;
&lt;p&gt;The difference isn&apos;t a better model. It&apos;s a methodology.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Problem Isn&apos;t Intelligence. It&apos;s Ambiguity.&lt;/h2&gt;
&lt;p&gt;When you hand an AI a story that says &quot;implement the patient authorization flow,&quot; the AI will deliver something. It will make decisions about state management, error handling, API boundaries, security constraints, and data persistence. It will make those decisions confidently, because that&apos;s what large language models do — they produce plausible outputs.&lt;/p&gt;
&lt;p&gt;The problem is that &quot;plausible&quot; and &quot;correct&quot; diverge exactly at the decision points that matter most. The AI doesn&apos;t know that your team decided last month to isolate token storage in a dedicated service. It doesn&apos;t know that your compliance framework requires audit events at the command level, not the API level. It doesn&apos;t know that the domain expert deferred the acuity algorithm to a later phase, so the placeholder invariant in the spec is intentional, not an oversight.&lt;/p&gt;
&lt;p&gt;Every ambiguous decision point in a ticket is a coin flip. The AI will resolve it, but it won&apos;t tell you it&apos;s guessing. And the resolution will be internally consistent, well-documented, and wrong in ways that require deep domain knowledge to catch.&lt;/p&gt;
&lt;p&gt;This is why smarter models don&apos;t fix the problem. The bottleneck was never reasoning capability. It was the absence of explicit, locked decisions upstream of execution.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Three Layers That Eliminated Guessing&lt;/h2&gt;
&lt;p&gt;I built a methodology — iteratively, over the course of these three products — that moves every material decision upstream of execution. It has three layers, and they must be completed in sequence.&lt;/p&gt;
&lt;h3&gt;Layer 1: Regulatory Foundation&lt;/h3&gt;
&lt;p&gt;Before any product design work begins, the compliance scaffolding gets built. For healthcare, that means a complete Quality Management System: policies, standard operating procedures, forms, evidence records, and a control mapping to whatever regulatory framework applies.&lt;/p&gt;
&lt;p&gt;This isn&apos;t checkbox compliance. Every SOP becomes a governing document that execution references. When a story says &quot;per SOP-014,&quot; that SOP exists, it&apos;s specific to the organization&apos;s architecture and tooling, and it defines exactly what evidence the story must produce. The AI doesn&apos;t interpret regulatory requirements at implementation time — the interpretation was done during the compliance authoring phase and locked in a controlled document.&lt;/p&gt;
&lt;p&gt;On the first product, a two-person team (me and the AI) produced 50 controlled documents in roughly 30 hours. Traditional timeline for that scope is 4–6 weeks with a dedicated compliance team.&lt;/p&gt;
&lt;h3&gt;Layer 2: Signal-Driven Design&lt;/h3&gt;
&lt;p&gt;This is where the product&apos;s architecture gets defined — not as a set of diagrams or a PRD that engineering interprets, but as a formal domain model that converges to zero ambiguity through iterative adversarial passes. I call this methodology &lt;a href=&quot;https://sdd.mmmnt.dev/&quot;&gt;Signal-Driven Design (SDD)&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;SDD draws from domain-driven design, event storming, and user journey mapping, but its core mechanic is adversarial convergence. The process works like this:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Extract a domain specification from the PRD and architectural decision records. This produces bounded contexts, aggregates, commands, events, invariants, policies, and sagas — the full vocabulary of what the system does.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run an adversarial gap analysis against the specification you just produced. The same session that wrote the spec tries to break it: structural gaps (missing aggregates, orphaned events), heuristic gaps (oversized aggregates, thin contexts), language gaps (inconsistent terminology), and decision gaps (unresolved architectural questions).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Resolve every gap collaboratively, one at a time. Some are mechanical fixes the AI handles as an interpreter. Others require architectural judgment — those escalate to me. The role separation is explicit: the AI proposes, I decide, and the decision gets recorded with rationale.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Regenerate the full specification incorporating all resolutions and run the adversarial analysis again. Repeat until gaps hit zero.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;On every product, this converges in three passes. The gap trajectory is predictable by pass two — you can see whether the model is tightening or oscillating. What matters is that when it converges, every command, every event, every invariant has been examined adversarially and either confirmed or corrected. There are no assumptions left in the specification.&lt;/p&gt;
&lt;p&gt;The items that can&apos;t be resolved — because a domain expert hasn&apos;t made the call yet, or a technical evaluation hasn&apos;t happened — go into a deferred resolutions tracker with an explicit owner. They don&apos;t get guessed at. They get flagged as implementation-blocking and carried forward visibly.&lt;/p&gt;
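&lt;p&gt;The loop itself is mechanical; the passes are where the work lives. As a rough sketch (the functions here are synthetic stand-ins, not SDD tooling; the real passes are LLM-driven review sessions with a human making the calls), the control flow looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// Sketch of the adversarial convergence loop. findGaps here is a
// synthetic analyzer; in practice each pass is an LLM review session
// and each resolution is a human decision recorded with rationale.
interface Gap { kind: string; detail: string }
interface Resolution { detail: string; decision: string; rationale: string }

// Reports any seeded gap that has no recorded resolution yet.
function findGaps(seeded: Gap[], resolutions: Resolution[]): Gap[] {
  const resolved = new Set(resolutions.map(function (r) { return r.detail; }));
  return seeded.filter(function (g) { return !resolved.has(g.detail); });
}

function converge(seeded: Gap[]): { resolutions: Resolution[]; passes: number } {
  const resolutions: Resolution[] = [];
  let passes = 0;
  let gaps = findGaps(seeded, resolutions);
  while (gaps.length !== 0) {
    passes += 1;
    for (const gap of gaps) {
      // Decision point: the AI proposes, the human decides.
      resolutions.push({ detail: gap.detail, decision: 'accept', rationale: 'recorded' });
    }
    gaps = findGaps(seeded, resolutions); // regenerate the spec, re-analyze
  }
  return { resolutions, passes };
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The exit condition is the point: the process ends when the analysis pass stops finding gaps, not when someone declares the spec done.&lt;/p&gt;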
&lt;h3&gt;Layer 3: Execution Planning With Closed Boundaries and Enforced Quality Gates&lt;/h3&gt;
&lt;p&gt;The converged domain specification feeds into an execution plan manifest — a single document that maps every milestone, epic, and story to the domain model, the governing ADRs, and the regulatory SOPs. Every story has:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A closed boundary defining exactly which files it creates, modifies, or reads&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Machine-verifiable exit criteria tied to the domain specification&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;References to the specific governing documents that define how it&apos;s built&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Parallelization tags based on file-level conflict analysis between stories&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;But the manifest alone is just a document. What makes it enforceable is the CI pipeline.&lt;/p&gt;
&lt;p&gt;Every quality gate from the methodology gets encoded into continuous integration. Coverage thresholds, test evidence requirements, traceability checks, linting rules that enforce architectural boundaries — these run on every pull request, automatically, without human intervention. The pipeline doesn&apos;t care who wrote the code. It validates that the output conforms to the specification and the governing documents.&lt;/p&gt;
&lt;p&gt;This is what keeps execution, tracking, and the AI in sync. You can cut corners, but the quality gates will catch you. A story that skips its test specification fails CI. A pull request that modifies files outside its closed boundary fails CI. Code that doesn&apos;t meet coverage thresholds fails CI. The methodology&apos;s discipline isn&apos;t enforced by willpower or code review — it&apos;s enforced mechanically, on every commit.&lt;/p&gt;
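&lt;p&gt;The closed-boundary gate in particular reduces to a set comparison. A minimal sketch (the names and shapes are illustrative, not the actual manifest schema or pipeline code):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// Illustrative closed-boundary check: fail CI when a pull request
// touches files outside the story's declared boundary.
interface StoryBoundary {
  creates: string[];   // files the story may create
  modifies: string[];  // files the story may change
  reads: string[];     // files the story may depend on, but not touch
}

function boundaryViolations(boundary: StoryBoundary, changedFiles: string[]): string[] {
  const allowed = new Set(boundary.creates.concat(boundary.modifies));
  return changedFiles.filter(function (file) { return !allowed.has(file); });
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In CI, &lt;code&gt;changedFiles&lt;/code&gt; would come from something like &lt;code&gt;git diff --name-only&lt;/code&gt;; any non-empty result fails the pull request before review starts.&lt;/p&gt;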
&lt;p&gt;The manifest becomes the contract between planning and execution. When the AI picks up a story, it doesn&apos;t need to make architectural decisions — every decision was already made during layer 2, documented in an ADR, and referenced on the ticket. The AI&apos;s job is translation: take the locked specification and produce code that matches it. And CI verifies that the translation is faithful.&lt;/p&gt;
&lt;p&gt;This is the key insight: &lt;strong&gt;the framework doesn&apos;t make the executor smarter. It makes the executor&apos;s judgment irrelevant to the outcome.&lt;/strong&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;What the Numbers Actually Mean&lt;/h2&gt;
&lt;p&gt;The 95% accuracy rate isn&apos;t the AI getting things right 95% of the time through better reasoning. It&apos;s the AI having nothing to get wrong. When every decision is locked in a governing document, every story has a closed boundary, and every invariant has been adversarially validated, the implementation is a mechanical translation exercise. The 5% that still requires correction is edge cases where the specification was ambiguous in ways the gap analysis didn&apos;t catch — and those corrections feed back into the next pass.&lt;/p&gt;
&lt;p&gt;The 40% correction rate without the framework isn&apos;t the AI being bad at coding. It&apos;s the AI being confident at guessing. Remove the guessing, and the number drops to the floor.&lt;/p&gt;
&lt;p&gt;My time split now is roughly 60% planning (working through the three layers with the AI as a collaborator) and 40% delivery oversight. That 40% isn&apos;t debugging code line by line. It&apos;s reviewing pull requests for shape, reading CI reports, checking test coverage summaries, and confirming that the pipeline&apos;s quality gates passed cleanly. When enough quality enforcement is locked into CI, I can trust the pipeline&apos;s reports rather than validating implementation details myself. The planning investment is front-loaded and significant. But the delivery phase is fast, predictable, and the reviews are architectural confirmations rather than defect hunts.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;What This Changes About AI-Augmented Teams&lt;/h2&gt;
&lt;p&gt;Yes, code is cheap. AI can produce it faster than any human team, and the quality floor keeps rising with every model generation. But producing code was never the hard part.&lt;/p&gt;
&lt;p&gt;The hard part is knowing what to build and why. It&apos;s tying together domain-driven design, event storming, user journey mapping, product management, architecture, engineering, testing, and compliance into a coherent system where every decision reinforces every other decision. That cross-discipline integration doesn&apos;t come from a tool. It comes from years of building things, shipping things, breaking things, and learning which decisions cascade and which ones don&apos;t.&lt;/p&gt;
&lt;p&gt;There&apos;s no bootcamp for this. No certificate course. No fast path. It&apos;s the accumulated judgment of someone who has been a product owner, an architect, a quality manager, and a compliance officer — often simultaneously — and who understands how those roles constrain and inform each other. The methodology I&apos;ve described isn&apos;t a process anyone can follow mechanically. It requires someone who can see the gap between a domain specification and a regulatory requirement, between an architectural decision and its downstream impact on sales cycles, between a testing strategy and its evidence value during an audit.&lt;/p&gt;
&lt;p&gt;AI is contextually aware of the now. It can reason about the specification in front of it with remarkable depth. But it doesn&apos;t see how today&apos;s decisions shape the product six months from now. It doesn&apos;t know that shifting regulatory compliance left — building it into the foundation instead of retrofitting it — puts a business at the front of the line in competitive markets where enterprise buyers require certification before the first demo. It doesn&apos;t know that a particular domain modeling decision will make or break a pricing tier, or that deferring a feature creates a dependency that blocks three other features in the next quarter.&lt;/p&gt;
&lt;p&gt;That strategic reasoning — the ability to see the future implications of present decisions — is what the human brings. And it&apos;s not reducible to a prompt.&lt;/p&gt;
&lt;p&gt;The methodology I&apos;ve described is labor-intensive on the front end. You can&apos;t skip the compliance layer, shortcut the adversarial convergence, or hand-wave the closed story boundaries. Each layer produces the inputs the next layer requires, and the traceability between layers is what makes the whole chain auditable — which matters in regulated environments, but also matters in any environment where you want to understand why a decision was made six months from now.&lt;/p&gt;
&lt;p&gt;The pattern is transferable. I&apos;ve run it across three products in different domains, with different tech stacks, different team sizes, and different regulatory requirements. The 40% correction rate without the framework is consistent. The 95% accuracy with it is consistent. The methodology scales because the problem it solves — ambiguity at execution time — is universal.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Human Role Didn&apos;t Shrink. It Clarified.&lt;/h2&gt;
&lt;p&gt;The job of humans in this era has changed. We are no longer responsible for the mundane, repetitive tasks that AI handles better and faster. What we own is design and vision.&lt;/p&gt;
&lt;p&gt;My role shifted entirely to where it has the highest leverage: making decisions during planning, setting architectural direction during domain convergence, and reviewing delivery for intent preservation. I&apos;m not reviewing less. I&apos;m reviewing differently. Instead of reading every line of code to find defects, I&apos;m looking at the shape of what&apos;s delivered, confirming it matches the specification, and trusting the CI pipeline to enforce the details mechanically.&lt;/p&gt;
&lt;p&gt;The AI is a better executor than it is a decision-maker. The methodology I&apos;ve built accepts that constraint and designs around it. Every decision gets made by a human with domain context, documented with traceability, and locked before execution begins. The AI then does what it&apos;s actually good at: translating explicit instructions into consistent output at speed. And the CI pipeline validates every translation, every time, without human intervention.&lt;/p&gt;
&lt;p&gt;That&apos;s the unlock. Not better AI. Better inputs to AI — created by humans who understand what they&apos;re building and why it matters.&lt;/p&gt;
</content:encoded></item><item><title>Introducing Moment and Facet: Your Domain Model Deserves to Run Before Your Code Does</title><link>https://blog.coada.dev/introducing-moment-and-facet-your-domain-model-deserves-to-run-before-your-code-does/</link><guid isPermaLink="true">https://blog.coada.dev/introducing-moment-and-facet-your-domain-model-deserves-to-run-before-your-code-does/</guid><description>You changed a field. Three downstream services broke in production. Nobody saw it coming — not the type checker, not the tests, not the code review. The contract between your service and theirs was ne</description><pubDate>Sun, 05 Apr 2026 00:03:30 GMT</pubDate><content:encoded>&lt;p&gt;You changed a field. Three downstream services broke in production. Nobody saw it coming — not the type checker, not the tests, not the code review. The contract between your service and theirs was never written down, never tested, and never visible to anyone. It lived in tribal knowledge and hope. And it broke on a Tuesday.&lt;/p&gt;
&lt;p&gt;If you&apos;ve built distributed systems, you have a version of this story. Maybe it was a renamed event field. Maybe it was a new required property that one consumer expected and another didn&apos;t. Maybe it was a multi-step process that spanned three services and silently stopped completing because the second one changed its response shape. The details differ. The shape is always the same: &lt;em&gt;the failure lived in the space between services, where nobody was looking.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;I&apos;ve been building software for over twenty years — Fortune 500 companies, startups that didn&apos;t survive, solo consulting where the architecture lived entirely in my head. Across all of that, this is the failure mode that never goes away. Not because engineers are careless. Because there&apos;s no tool that makes the contracts between services visible, testable, and enforceable &lt;em&gt;before&lt;/em&gt; you ship.&lt;/p&gt;
&lt;p&gt;I&apos;ve spent the last year writing about this gap. I named patterns. I documented a methodology. I wrote about what happens when you use AI not for code generation, but for domain modeling — the hard, slow, unglamorous work of deciding what your system &lt;em&gt;means&lt;/em&gt; before deciding how it &lt;em&gt;runs&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;What I didn&apos;t write about is that I was building the thing I couldn&apos;t find.&lt;/p&gt;
&lt;p&gt;Today I&apos;m releasing Moment — an open-source domain specification toolchain — and Facet, its companion visualization platform.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The gap that nobody names&lt;/h2&gt;
&lt;p&gt;Here&apos;s the thing about Domain-Driven Design that nobody talks about at conferences: there&apos;s no specification layer.&lt;/p&gt;
&lt;p&gt;You whiteboard it. You sticky-note it. You argue about bounded context boundaries in a room (or alone, staring at a screen). And then you go write code. There&apos;s nothing in between. No artifact that says &quot;this is the model&quot; in a way that generates your types, your tests, your documentation, and your API contracts. No way to ask &quot;does this event flow actually work?&quot; before you&apos;ve implemented both sides.&lt;/p&gt;
&lt;p&gt;Your domain model lives in seven places by the end of week one: the wiki page that&apos;s already stale, the TypeScript interfaces that diverged on Tuesday, the tests that test the wrong contract, the Markdown doc nobody reads, the API spec that&apos;s three fields behind, the whiteboard photo from last sprint, and someone&apos;s head.&lt;/p&gt;
&lt;p&gt;Six of those will be wrong by next month. You just won&apos;t know which six.&lt;/p&gt;
&lt;p&gt;Evans gave us the vocabulary for &lt;em&gt;where things live&lt;/em&gt; — bounded contexts, aggregates, entities, value objects. That spatial model changed everything about how we think about software. But the vocabulary for &lt;em&gt;how things move through time&lt;/em&gt; — events crossing boundaries, processes orchestrating across contexts, policies reacting to state changes — has no formal home. It lives in conversations that ended months ago and assumptions nobody wrote down.&lt;/p&gt;
&lt;p&gt;That&apos;s the gap. And it&apos;s where every painful production incident I&apos;ve ever debugged actually lived.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Two tools. One workflow.&lt;/h2&gt;
&lt;p&gt;Moment lives in your terminal. Facet lives in your browser. Together they do something that doesn&apos;t exist anywhere else in the DDD ecosystem: they let you &lt;em&gt;run your domain model before you write implementation code&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Moment&lt;/strong&gt; is a specification language and toolchain. You write a &lt;code&gt;.moment&lt;/code&gt; file — one file — that describes your bounded contexts, aggregates, commands, events, and the temporal flows between them. Flows are the key. A flow describes events crossing bounded context boundaries with explicit relationship types and contracts. &lt;code&gt;OrderPlaced&lt;/code&gt; crosses from Ordering to Fulfillment via CustomerSupplier, and the contract says &lt;code&gt;orderId&lt;/code&gt; and &lt;code&gt;items&lt;/code&gt; are required. That crossing has never had a home before. Now it does.&lt;/p&gt;
&lt;p&gt;From that one file, Moment generates everything downstream: typed TypeScript interfaces, BDD scenarios, test scaffolds with crossing assertions, specification documents with Mermaid diagrams, and AsyncAPI 3.0 contracts. One specification, six artifact types, zero drift. Deterministic output — same input, same output, clean git diffs.&lt;/p&gt;
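&lt;p&gt;To make that concrete: for the &lt;code&gt;OrderPlaced&lt;/code&gt; crossing above, the generated TypeScript might resemble the following. This is an illustrative guess at the shape, not Moment&apos;s actual output, and the &lt;code&gt;OrderItem&lt;/code&gt; fields are invented for the example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// Illustrative guess at a generated artifact, not Moment's actual output.
// OrderItem's fields are invented for the example.
interface OrderItem {
  sku: string;
  quantity: number;
}

interface OrderPlaced {
  orderId: string;     // UUID
  customerId: string;  // UUID
  items: OrderItem[];
  placedAt: string;    // DateTime, ISO 8601
}

// The CustomerSupplier crossing contract: Fulfillment requires
// orderId and a non-empty items list.
function satisfiesCrossingContract(event: OrderPlaced): boolean {
  if (event.orderId.length === 0) { return false; }
  return event.items.length !== 0;
}
&lt;/code&gt;&lt;/pre&gt;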
&lt;p&gt;But generation isn&apos;t the revelation. Simulation is.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Facet&lt;/strong&gt; is where your domain model becomes visible. Run &lt;code&gt;moment simulate&lt;/code&gt; and Moment produces synthetic event flows — complete with causation chains, correlation tracking, and branch paths. Open Facet and you &lt;em&gt;see&lt;/em&gt; those flows rendered as interactive timelines. You watch &lt;code&gt;OrderPlaced&lt;/code&gt; cross from Ordering to Fulfillment. You see the causation chain — which event caused which. You see where a contract field is missing, where a process stalls, where a branch path leads somewhere your model didn&apos;t anticipate.&lt;/p&gt;
&lt;p&gt;You see your domain model &lt;em&gt;running&lt;/em&gt;. Before a single aggregate class exists.&lt;/p&gt;
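&lt;p&gt;Causation and correlation are easy to conflate, so here is a minimal sketch of how the two identifiers differ and how a causation chain can be walked. This is an illustration, not Moment&apos;s simulation format:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// Illustrative only. correlationId ties every event in one business
// process together; causationId points at the single event that
// directly caused this one.
interface SimulatedEvent {
  id: string;
  type: string;
  correlationId: string;
  causationId: string | null; // null for the event that started the flow
}

// Walk a causation chain backwards from any event to the root trigger.
function causationChain(events: SimulatedEvent[], start: SimulatedEvent): string[] {
  const byId: { [id: string]: SimulatedEvent } = {};
  for (const e of events) {
    byId[e.id] = e;
  }
  const chain: string[] = [];
  let current: SimulatedEvent | undefined = start;
  while (current) {
    chain.unshift(current.type);
    current = current.causationId === null ? undefined : byId[current.causationId];
  }
  return chain;
}
&lt;/code&gt;&lt;/pre&gt;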
&lt;p&gt;I can&apos;t overstate what this felt like the first time it worked. I&apos;d spent months writing specifications, generating code, trusting that the model was right because the types compiled and the tests passed. Then I opened Facet and watched the events flow through the system I&apos;d described — and I could &lt;em&gt;see&lt;/em&gt; the places where my thinking was incomplete. Not wrong, exactly. Incomplete. Assumptions I&apos;d made about how contexts interact that I&apos;d never been forced to make explicit.&lt;/p&gt;
&lt;p&gt;In every other engineering discipline, you test the design before you build. Civil engineers don&apos;t pour concrete and then check load calculations. Chip designers don&apos;t fabricate and then verify logic. But in domain modeling, we&apos;ve been going from whiteboard to code for twenty years and pretending that&apos;s fine.&lt;/p&gt;
&lt;p&gt;It&apos;s not fine. We just didn&apos;t have the tooling to do better.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;What the specification looks like&lt;/h2&gt;
&lt;p&gt;A &lt;code&gt;.moment&lt;/code&gt; file reads like a domain description, not a programming language:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-plaintext&quot;&gt;context &quot;Ordering&quot; [Core]

  aggregate &quot;Order&quot;
    identity orderId: UUID

    command PlaceOrder
      input customerId: UUID, items: OrderItem[]
      precondition orderNotPlaced: &quot;Order has not already been placed&quot;
      emits OrderPlaced

    event OrderPlaced
      orderId: UUID
      customerId: UUID
      items: OrderItem[]
      placedAt: DateTime

flow &quot;order-placed&quot;
  lane ordering &quot;Ordering&quot; [Core]
  lane fulfillment &quot;Fulfillment&quot; [Supporting]

  moment &quot;Order submission&quot;
    ordering: PlaceOrder
    ordering: OrderPlaced crosses-to fulfillment via CustomerSupplier
      contract
        orderId: UUID [required]
        items: OrderItem[] [required]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;flow&lt;/code&gt; block is what makes this different from every DDD tool that&apos;s come before. It encodes the temporal dimension — the thing Evans didn&apos;t formalize, the thing Brandolini&apos;s sticky notes gesture at but can&apos;t enforce. The crossing contract that broke on a Tuesday? This is where it lives now.&lt;/p&gt;
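&lt;p&gt;What enforcing a crossing contract means in practice can be sketched in a few lines. The validator below is a hypothetical illustration of the idea, not Moment&apos;s implementation; the field names come from the spec above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// Sketch only: check a crossing contract before an event leaves its context.
interface CrossingContract {
  requiredFields: string[];
}

// Returns the missing required fields; empty means the crossing is valid.
function validateCrossing(payload: { [key: string]: unknown }, contract: CrossingContract): string[] {
  const missing: string[] = [];
  for (const field of contract.requiredFields) {
    if (payload[field] === undefined) {
      missing.push(field);
    }
  }
  return missing;
}

const orderPlacedCrossing: CrossingContract = { requiredFields: ["orderId", "items"] };
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A payload missing &lt;code&gt;items&lt;/code&gt; fails the crossing before Fulfillment ever sees it, which is the whole point: the boundary check runs where the event is produced, not where it breaks.&lt;/p&gt;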
&lt;p&gt;I won&apos;t walk through the full pipeline here — the &lt;a href=&quot;https://moment.mmmnt.dev&quot;&gt;site&lt;/a&gt; has a seven-tab interactive example showing the generated TypeScript, Gherkin, test scaffolds, spec docs, AsyncAPI, and simulation output from a single &lt;code&gt;.moment&lt;/code&gt; file. Go look. It&apos;s the thing I&apos;m proudest of on that page.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Where this came from&lt;/h2&gt;
&lt;p&gt;I&apos;m a solo founder. No team. No co-founder. Just me and an AI partner doing the kind of knowledge crunching that Evans described in 2003 — except instead of a room full of domain experts, it&apos;s one architect with twenty years of scar tissue and a language model that never forgets a bounded context boundary.&lt;/p&gt;
&lt;p&gt;That practice produced &lt;a href=&quot;https://sdd.mmmnt.dev&quot;&gt;Signal-Driven Development&lt;/a&gt; — a three-pass convergence methodology where you model, run a gap report, resolve gaps, and iterate until the gap count hits zero. Zero gaps is the definition of done that DDD never had. I &lt;a href=&quot;https://listenrightmeow.hashnode.dev/introducing-signal-driven-development&quot;&gt;wrote about it&lt;/a&gt;. I &lt;a href=&quot;https://github.com/listenrightmeow/signal-driven-development&quot;&gt;built the templates&lt;/a&gt;. I applied it to every product in my ecosystem, across hundreds of ADRs and thousands of domain decisions.&lt;/p&gt;
&lt;p&gt;And through all of it, one realization kept growing: the spatial model isn&apos;t enough. DDD gives you a world-class vocabulary for &lt;em&gt;structure&lt;/em&gt; — what lives where, what owns what, what depends on what. But it gives you almost nothing for &lt;em&gt;motion&lt;/em&gt; — how events flow through that structure over time, what contracts govern the crossings, what happens when a process spans three contexts and the second one fails.&lt;/p&gt;
&lt;p&gt;That&apos;s what Moment formalizes. And Facet makes it visible.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The ecosystem&lt;/h2&gt;
&lt;p&gt;Moment and Facet are part of a larger toolchain called Complai. The full workflow:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Sift&lt;/strong&gt; discovers domains — bounded contexts, aggregates, commands, events — through AI-mediated knowledge crunching. It publishes structured domain events.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Moment&lt;/strong&gt; takes those building blocks and adds temporal scope — flows, crossings, and contracts. It generates typed implementations and simulation scenarios.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Facet&lt;/strong&gt; visualizes those scenarios — interactive timelines, causation chains, crossing contracts, branch paths. It&apos;s where you see your model run.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Forge&lt;/strong&gt; (coming soon) bootstraps project structure from Moment&apos;s typed artifacts.&lt;/p&gt;
&lt;p&gt;Every event flowing between these tools uses a shared envelope format. Each tool can be used independently — Moment doesn&apos;t require Sift, Facet doesn&apos;t require an account for local simulation. The full &lt;a href=&quot;https://moment.mmmnt.dev/ecosystem&quot;&gt;ecosystem page&lt;/a&gt; shows how they connect.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Current state&lt;/h2&gt;
&lt;p&gt;Moment is functionally complete. Ten packages published on npm under &lt;code&gt;@mmmnt/*&lt;/code&gt;. The full pipeline works end-to-end: parse, derive, generate, emit. Schema governance, drift detection, simulation, MCP server — all shipped.&lt;/p&gt;
&lt;p&gt;Facet is live at &lt;a href=&quot;https://facet.mmmnt.dev&quot;&gt;facet.mmmnt.dev&lt;/a&gt;.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Try it&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;git clone https://github.com/mmmnt/mmmnt.git
cd mmmnt &amp;amp;&amp;amp; pnpm install &amp;amp;&amp;amp; pnpm turbo build

moment parse spec.moment   # parse the .moment specification
moment derive              # derive the domain model
moment generate --all      # emit all six artifact types
moment simulate            # produce synthetic event flows for Facet
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then open the simulation output in Facet and watch your domain model run.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Moment&lt;/strong&gt;: &lt;a href=&quot;https://moment.mmmnt.dev&quot;&gt;moment.mmmnt.dev&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Facet&lt;/strong&gt;: &lt;a href=&quot;https://facet.mmmnt.dev&quot;&gt;facet.mmmnt.dev&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href=&quot;https://github.com/mmmnt/mmmnt&quot;&gt;github.com/mmmnt/mmmnt&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Discord&lt;/strong&gt;: &lt;a href=&quot;https://discord.gg/YcRqsQUQuu&quot;&gt;discord.gg/YcRqsQUQuu&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;SDD Methodology&lt;/strong&gt;: &lt;a href=&quot;https://sdd.mmmnt.dev&quot;&gt;sdd.mmmnt.dev&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;p&gt;Moment stands on the shoulders of the DDD community — Evans, Young, Brandolini, Vernon, Tune, Khononov — and encodes their patterns into a toolchain that enforces what books can only recommend. And thanks to Hatoum for &lt;a href=&quot;https://narrativedriven.org/&quot;&gt;NDD&lt;/a&gt;, and for introducing me to DDD 11 years ago.&lt;/p&gt;
&lt;p&gt;I built the thing I needed. I think you might need it too.&lt;/p&gt;
</content:encoded></item><item><title>Event Storming Is a Language, Not a Workshop</title><link>https://blog.coada.dev/event-storming-is-a-language-not-a-workshop/</link><guid isPermaLink="true">https://blog.coada.dev/event-storming-is-a-language-not-a-workshop/</guid><description>The DDD community treats Event Storming as a workshop activity. You book a room. You buy sticky notes — orange for domain events, blue for commands, yellow for aggregates, purple for policies, pink fo</description><pubDate>Mon, 30 Mar 2026 03:37:15 GMT</pubDate><content:encoded>&lt;p&gt;The DDD community treats Event Storming as a workshop activity. You book a room. You buy sticky notes — orange for domain events, blue for commands, yellow for aggregates, purple for policies, pink for hotspots. You invite the domain experts, the developers, the product managers. You fill a wall. You photograph it. You translate it into Jira tickets and architecture diagrams. Then the sticky notes come down and the wall goes back to being a wall.&lt;/p&gt;
&lt;p&gt;That&apos;s how Event Storming is taught, practiced, and discussed. As a facilitated session with a physical output that exists for as long as the adhesive holds.&lt;/p&gt;
&lt;p&gt;I think this framing buries the most important thing Brandolini created. And I think the evidence shows that what he created is bigger than the workshop format can contain.&lt;/p&gt;
&lt;hr /&gt;
&lt;img src=&quot;https://cdn.hashnode.com/uploads/covers/69ae573286766ac3a67a2e78/458ce5b5-86b7-457e-a361-35bb1614c508.jpg&quot; alt=&quot;&quot; style=&quot;display:block;margin:0 auto&quot; /&gt;

&lt;p&gt;Start with the sticky note colors. They aren&apos;t a workshop convenience. They&apos;re a type system.&lt;/p&gt;
&lt;p&gt;Orange means &quot;something happened&quot; — a domain event. Blue means &quot;someone or something wants to cause a change&quot; — a command. Yellow means &quot;this thing owns state and enforces rules&quot; — an aggregate. Purple means &quot;when this happens, do that&quot; — a policy. Pink means &quot;we don&apos;t understand this yet&quot; — a hotspot. Lilac means &quot;this is derived data&quot; — a read model.&lt;/p&gt;
&lt;p&gt;Each color is a semantic classification. Each classification carries behavioral expectations. An orange sticky doesn&apos;t just label a fact — it declares that something irreversible occurred in the domain, that downstream consequences may follow, and that the system&apos;s state changed in a way that matters to the business. A purple sticky doesn&apos;t just describe a reaction — it declares a stateless causal link between an event and a command, a temporal dependency that governs how the system behaves over time.&lt;/p&gt;
&lt;p&gt;These aren&apos;t categories you impose on a domain. They&apos;re structural roles that domain concepts naturally fill. Every domain has things that happen (events), things that cause change (commands), things that own state (aggregates), and things that react to change (policies). Brandolini didn&apos;t invent these roles. He gave them colors.&lt;/p&gt;
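&lt;p&gt;The claim is literal enough to write down. Here is a simplified rendering of the palette as a TypeScript discriminated union — an illustration of the point, not a proposal for tooling:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// The sticky-note palette as a discriminated union. Simplified on purpose.
type StickyNote =
  | { color: "orange"; kind: "event"; name: string }      // something happened
  | { color: "blue"; kind: "command"; name: string }      // someone wants a change
  | { color: "yellow"; kind: "aggregate"; name: string }  // owns state, enforces rules
  | { color: "purple"; kind: "policy"; name: string }     // when this happens, do that
  | { color: "pink"; kind: "hotspot"; name: string }      // not understood yet
  | { color: "lilac"; kind: "readmodel"; name: string };  // derived data

// The compiler forces every classification to be handled -- the behavioral
// expectation each color carries becomes machine-checked.
function describe(note: StickyNote): string {
  switch (note.kind) {
    case "event": return note.name + " happened";
    case "command": return note.name + " was requested";
    case "aggregate": return note.name + " owns state";
    case "policy": return note.name + " reacts";
    case "hotspot": return note.name + " is unresolved";
    case "readmodel": return note.name + " is derived";
  }
}
&lt;/code&gt;&lt;/pre&gt;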
&lt;p&gt;The question is whether the colors — the vocabulary — are bound to the workshop format, or whether they describe something about reactive systems that exists independently of sticky notes on a wall.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Here&apos;s the argument that they do.&lt;/p&gt;
&lt;p&gt;Take any reactive domain and model how it behaves over time. Not what it &lt;em&gt;is&lt;/em&gt; — what it &lt;em&gt;does&lt;/em&gt;. Follow a single business process from trigger to completion.&lt;/p&gt;
&lt;p&gt;A customer places an order. That&apos;s a domain event — something happened. The system needs to reserve inventory. That&apos;s a policy — when an order is placed, initiate reservation. The policy produces a command — ReserveInventory. The command executes against the Inventory aggregate, which checks stock levels, enforces the invariant that reserved quantity cannot exceed available quantity, and emits a new event — StockReserved. That event triggers another policy — when stock is reserved, initiate payment capture. Payment capture produces a command against the Payment aggregate. The Payment aggregate processes it and emits PaymentCaptured. Which triggers the fulfillment policy. Which produces the CreateShipment command. Which executes against the Shipment aggregate. Which emits ShipmentCreated.&lt;/p&gt;
&lt;p&gt;Event → Policy → Command → Aggregate → Event → Policy → Command → Aggregate → Event.&lt;/p&gt;
&lt;p&gt;Orange → Purple → Blue → Yellow → Orange → Purple → Blue → Yellow → Orange.&lt;/p&gt;
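&lt;p&gt;The same chain can be written as data, which makes the alternation checkable. Everything below is an illustration; the names come from the walkthrough above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// The reactive flow as plain data; the alternating roles are visible in the types.
type Step =
  | { role: "event"; name: string }
  | { role: "policy"; name: string }
  | { role: "command"; name: string }
  | { role: "aggregate"; name: string };

const orderFlow: Step[] = [
  { role: "event", name: "OrderPlaced" },
  { role: "policy", name: "InventoryReservation" },
  { role: "command", name: "ReserveInventory" },
  { role: "aggregate", name: "Inventory" },
  { role: "event", name: "StockReserved" },
  { role: "policy", name: "PaymentCapture" },
  { role: "command", name: "CapturePayment" },
  { role: "aggregate", name: "Payment" },
  { role: "event", name: "PaymentCaptured" },
];

// Check the alternation described above: event, policy, command, aggregate, repeating.
function followsReactivePattern(flow: Step[]): boolean {
  const cycle = ["event", "policy", "command", "aggregate"];
  for (let i = 0; i !== flow.length; i = i + 1) {
    if (flow[i].role !== cycle[i % 4]) {
      return false;
    }
  }
  return true;
}
&lt;/code&gt;&lt;/pre&gt;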
&lt;img src=&quot;https://cdn.hashnode.com/uploads/covers/69ae573286766ac3a67a2e78/4178d074-5cf3-4d62-9a55-fb995bad6f46.jpg&quot; alt=&quot;&quot; style=&quot;display:block;margin:0 auto&quot; /&gt;

&lt;p&gt;I didn&apos;t import Event Storming to produce that decomposition. I described what happens in the domain over time, and the vocabulary appeared. The events are events because irreversible things happened. The policies are policies because stateless reactions connect those events to actions. The commands are commands because something needs to change state. The aggregates are aggregates because something owns the state being changed and enforces the rules about how it changes.&lt;/p&gt;
&lt;p&gt;The reactive flow isn&apos;t a framework. It&apos;s how temporal causality decomposes in bounded systems. Brandolini&apos;s colors are a notation for a structure that already exists in every reactive domain.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Now go further. Look at what happens at the boundaries.&lt;/p&gt;
&lt;p&gt;The OrderPlaced event was emitted by the Order aggregate in the Order bounded context. But the InventoryReservation policy lives in the Inventory context. That event just crossed a context boundary. The policy that consumes it doesn&apos;t know anything about the Order aggregate&apos;s internals — it only knows that something happened and it needs to react.&lt;/p&gt;
&lt;p&gt;This is where Evans&apos; Anti-Corruption Layer meets Brandolini&apos;s reactive flow. The context boundary is the place where the event&apos;s semantics must be translated — where OrderPlaced in the Order context becomes a signal that the Inventory context interprets through its own ubiquitous language. The policy is the translation mechanism. It sits at the boundary, consumes a foreign event, and produces a command in the local context&apos;s vocabulary.&lt;/p&gt;
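&lt;p&gt;That translation is small enough to sketch: a stateless policy consuming a foreign event and emitting a local command. All names below are invented for illustration:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// Sketch of a policy acting as a boundary translator -- an ACL for reactive flows.
// The Inventory context never touches Order internals, only the fields it needs.

// Foreign event, in the Order context vocabulary.
interface OrderPlaced {
  orderId: string;
  customerId: string;
  items: { sku: string; quantity: number }[];
}

// Local command, in the Inventory context vocabulary.
interface ReserveInventory {
  reservationKey: string;
  lines: { sku: string; quantity: number }[];
}

// The policy: stateless, consumes the foreign event, speaks the local language.
function onOrderPlaced(event: OrderPlaced): ReserveInventory {
  return {
    reservationKey: "order-" + event.orderId,
    lines: event.items.map(function (item) {
      return { sku: item.sku, quantity: item.quantity };
    }),
  };
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note what the policy drops: &lt;code&gt;customerId&lt;/code&gt; never enters the Inventory context. The translation is also a filter.&lt;/p&gt;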
&lt;img src=&quot;https://cdn.hashnode.com/uploads/covers/69ae573286766ac3a67a2e78/fd83250c-7b25-498c-a1b2-b14f09b7e79a.jpg&quot; alt=&quot;&quot; style=&quot;display:block;margin:0 auto&quot; /&gt;

&lt;p&gt;Brandolini&apos;s purple sticky — the policy — is doing the same work as Evans&apos; ACL. It&apos;s the boundary translation layer for reactive flows. This isn&apos;t a metaphor. It&apos;s a structural equivalence. The policy sticky note is the temporal version of the spatial ACL. Evans described how bounded contexts protect their internal models from external semantics. Brandolini described how bounded contexts react to external events through local translations. Same architectural function, different dimension — space versus time.&lt;/p&gt;
&lt;p&gt;Nobody in the DDD literature has formalized this equivalence. Evans writes about spatial boundaries. Brandolini writes about temporal flows. The two vocabularies coexist in the community without anyone pointing out that they&apos;re describing the same boundary mediation from different angles.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;The shift that made this visible to me came from an unexpected place.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://narrativedriven.org/&quot;&gt;Narrative Driven Development&lt;/a&gt; — Sam&apos;s work — introduced a framing that reoriented how I think about domain events entirely. NDD treats domain events not as data payloads or state transitions, but as &lt;em&gt;moments in time&lt;/em&gt;. A domain event isn&apos;t just a record that something changed. It&apos;s a moment with temporal significance — it happened at a point in time, in a sequence, with causal relationships to what came before and what comes after.&lt;/p&gt;
&lt;p&gt;This sounds like a nuance. It isn&apos;t. It&apos;s a reorientation of the entire modeling perspective.&lt;/p&gt;
&lt;p&gt;When you think of events as state changes, you model them as facts about aggregates. &quot;The order&apos;s status changed to Placed.&quot; That&apos;s spatial — it describes the aggregate&apos;s state at a point. When you think of events as moments in time, you model them as points in a causal sequence. &quot;An order was placed, which means inventory must be checked, which means payment must be captured, which means fulfillment must begin.&quot; That&apos;s temporal — it describes what unfolds across the system as a consequence.&lt;/p&gt;
&lt;p&gt;The spatial framing keeps you inside the aggregate. The temporal framing pulls you across boundaries, through policies, into the reactive flow that connects the entire domain. And the moment you model with temporal scope — events crossing boundaries over time — Event Storming&apos;s swim lane vocabulary isn&apos;t something you choose to use. It&apos;s the structure your model produces.&lt;/p&gt;
&lt;p&gt;NDD planted the seed. Months of building grew it into something I couldn&apos;t ignore: the reactive flow decomposition isn&apos;t Brandolini&apos;s invention. It&apos;s a property of temporal domain modeling that Brandolini&apos;s workshop makes visible. The workshop is one discovery mechanism. Temporal modeling is another. They converge on the same structure because the structure is in the domain, not in the method.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Here&apos;s where this gets practically important and stops being philosophical.&lt;/p&gt;
&lt;img src=&quot;https://cdn.hashnode.com/uploads/covers/69ae573286766ac3a67a2e78/f9534854-1431-432d-91d5-49bfa9255bfb.jpg&quot; alt=&quot;&quot; style=&quot;display:block;margin:0 auto&quot; /&gt;

&lt;p&gt;If Event Storming&apos;s vocabulary is a discovered structure rather than an invented method, then access to that structure doesn&apos;t require the workshop. This matters because the workshop has prerequisites that exclude a large part of the DDD community.&lt;/p&gt;
&lt;p&gt;You need a room. You need domain experts who can dedicate half a day or more. You need a facilitator skilled enough to keep the session productive. You need developers and product people in the same physical or virtual space. You need the organizational authority to pull all of these people out of their day jobs simultaneously.&lt;/p&gt;
&lt;p&gt;Solo practitioners don&apos;t have this. Small teams don&apos;t have this. Engineers at companies where domain experts are executives who won&apos;t sit in a four-hour workshop don&apos;t have this. The workshop format creates an accessibility barrier that locks out precisely the people who would benefit most from the vocabulary.&lt;/p&gt;
&lt;p&gt;But the vocabulary works without the workshop. You can model a domain&apos;s temporal behavior alone — event by event, policy by policy, command by command — and arrive at the same reactive decomposition that a room full of sticky notes would produce. Not because you&apos;re doing Event Storming without the workshop. Because you&apos;re modeling temporal causality, and the structure is the same regardless of how you discover it.&lt;/p&gt;
&lt;p&gt;The workshop is valuable. Brandolini&apos;s facilitation techniques surface domain knowledge from non-technical experts in ways that nothing else matches. The physical, collaborative, high-energy format produces insights that solo modeling rarely does. If you can run a workshop, run one.&lt;/p&gt;
&lt;p&gt;But stop telling people that Event Storming requires the workshop. It doesn&apos;t. The workshop requires Event Storming&apos;s vocabulary. The vocabulary doesn&apos;t require the workshop. The distinction matters because thousands of practitioners are locked out of the method by logistics while the language sits there, available, waiting to be used.&lt;/p&gt;
&lt;hr /&gt;
&lt;img src=&quot;https://cdn.hashnode.com/uploads/covers/69ae573286766ac3a67a2e78/816ee20b-b39c-4d24-9e0d-ca5254d7c9e4.png&quot; alt=&quot;&quot; style=&quot;display:block;margin:0 auto&quot; /&gt;

&lt;p&gt;There&apos;s a deeper point here about what Brandolini actually contributed to DDD.&lt;/p&gt;
&lt;p&gt;Evans gave us the spatial vocabulary — bounded contexts, aggregates, entities, value objects. The nouns of domain modeling. The things the system &lt;em&gt;is&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Brandolini gave us the temporal vocabulary — events, commands, policies, read models. The verbs of domain modeling. The things the system &lt;em&gt;does&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;These aren&apos;t competing frameworks. They&apos;re complementary dimensions of the same domain. Evans models the space. Brandolini models the time. Together, they give you a complete vocabulary for describing what a system is, what it does, how it reacts, and how it changes.&lt;/p&gt;
&lt;p&gt;But the community treats Evans as the foundation and Brandolini as a workshop technique. That&apos;s wrong. Brandolini&apos;s vocabulary is as foundational as Evans&apos;. It&apos;s just been trapped inside a workshop format that makes it look like a facilitation tool instead of what it actually is — the temporal half of domain-driven design.&lt;/p&gt;
&lt;p&gt;Event Storming isn&apos;t a workshop. It&apos;s a language. The workshop is one way to speak it.&lt;/p&gt;
&lt;p&gt;It&apos;s not the only way.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;em&gt;This is an independent post — not part of the Signal-Driven Development series. More at&lt;/em&gt; &lt;a href=&quot;https://listenrightmeow.hashnode.dev&quot;&gt;&lt;em&gt;listenrightmeow.hashnode.dev&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;
</content:encoded></item><item><title>AI Made My Architecture Deeper, Not Simpler</title><link>https://blog.coada.dev/ai-made-my-architecture-deeper-not-simpler/</link><guid isPermaLink="true">https://blog.coada.dev/ai-made-my-architecture-deeper-not-simpler/</guid><description>The pitch is always simplification. AI will make development faster, architectures leaner, systems easier. Fewer decisions, fewer layers, fewer meetings. Point the model at your domain, let it generat</description><pubDate>Sun, 29 Mar 2026 23:40:25 GMT</pubDate><content:encoded>&lt;p&gt;The pitch is always simplification. AI will make development faster, architectures leaner, systems easier. Fewer decisions, fewer layers, fewer meetings. Point the model at your domain, let it generate the structure, ship it.&lt;/p&gt;
&lt;p&gt;That&apos;s not what happened.&lt;/p&gt;
&lt;p&gt;What happened is the opposite. Working with AI as a domain modeling partner over months of sustained architectural work produced a system with more layers, more explicit decisions, more structural depth than anything I&apos;ve built with a team. Not because the AI over-engineered it. Because the AI never let me skip the parts that teams routinely skip.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Here&apos;s what I mean by &quot;skip.&quot;&lt;/p&gt;
&lt;p&gt;Every architect has done this: you&apos;re modeling a domain, you hit a decision that requires careful analysis, and you defer it. Not intentionally. You just move on because the next bounded context is more interesting, or the aggregate you&apos;re working on feels close enough, or the team is tired and somebody says &quot;we&apos;ll clean this up later.&quot; The decision doesn&apos;t get made. It doesn&apos;t get documented. It becomes an implicit assumption embedded in the codebase, invisible until something breaks in production and someone traces the failure back to a modeling choice that was never actually a choice — it was an absence.&lt;/p&gt;
&lt;p&gt;Teams do this constantly. Not because they&apos;re careless, but because collaborative modeling has natural momentum. Discussions move forward. Whiteboards get erased. The facilitator keeps the energy up. And the hard, awkward, slow decisions — the ones where you sit with ambiguity for twenty minutes before the right boundary reveals itself — those are exactly the decisions that group dynamics conspire to skip.&lt;/p&gt;
&lt;p&gt;AI doesn&apos;t have group dynamics. It doesn&apos;t get tired of your domain model. It doesn&apos;t lose context from the session three weeks ago. And when you move past a decision without resolving it, it doesn&apos;t politely let it go. You can try to skip ahead, but the model holds. Every unresolved ambiguity sits there, waiting, until you deal with it.&lt;/p&gt;
&lt;p&gt;That&apos;s not simplification. That&apos;s a forcing function for depth.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Two patterns emerged from this work that I didn&apos;t anticipate.&lt;/p&gt;
&lt;p&gt;The first was the Candidate Lifecycle — the realization that when AI generates domain artifacts, every output must enter as a candidate, not as confirmed knowledge. In a team setting, the knowledge crunching process embeds trust implicitly. The domain expert says &quot;we process refunds within 30 days,&quot; the team discusses it, the model absorbs it. Nobody stamps it &quot;confirmed.&quot; The confirmation was the conversation itself.&lt;/p&gt;
&lt;p&gt;AI doesn&apos;t participate in that conversation. It generates interpretations — some right, some close, some subtly wrong in ways that look right. Without an explicit confirmation boundary, those interpretations flow into the model as if they were validated domain knowledge. They weren&apos;t. They were candidates that nobody reviewed because the process assumed implicit trust that no longer existed.&lt;/p&gt;
&lt;p&gt;I didn&apos;t design the Candidate Lifecycle because I read a paper about AI trust boundaries. I designed it because I watched interpretations enter the model unchallenged and realized the model was accumulating confident noise. The pattern wasn&apos;t planned. It was forced — the architecture demanded it because the alternative was a domain model built on unverified assumptions.&lt;/p&gt;
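&lt;p&gt;The skeleton of the pattern is tiny; the discipline is in applying it everywhere. A minimal sketch, with invented names, of the idea rather than the actual implementation:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// Skeletal sketch of the Candidate Lifecycle. The point is only that
// AI output enters as a candidate and nothing downstream accepts it
// until a human explicitly confirms it.
type CandidateStatus = "candidate" | "confirmed" | "rejected";

interface DomainArtifact {
  name: string;
  interpretation: string;
  status: CandidateStatus;
}

// Every AI-generated interpretation starts life unconfirmed.
function propose(name: string, interpretation: string): DomainArtifact {
  return { name: name, interpretation: interpretation, status: "candidate" };
}

// The model only ever builds on artifacts a human has confirmed.
function confirmedOnly(artifacts: DomainArtifact[]): DomainArtifact[] {
  return artifacts.filter(function (a) {
    return a.status === "confirmed";
  });
}
&lt;/code&gt;&lt;/pre&gt;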
&lt;p&gt;The second was the Classification Gap — the discovery that a semantically wrong but structurally valid classification produces a model that passes every check. A reactive behavior misclassified as a static constraint looks perfect on paper. The gap report says nothing. The heuristics are healthy. And the architecture is missing an entire reactive path because the concept that would have anchored it was placed in the wrong category.&lt;/p&gt;
&lt;p&gt;I didn&apos;t discover the Classification Gap through a literature review. I discovered it because I was staring at a specification that looked clean but felt wrong. Something was missing from the event flows, but the gap report insisted the model was converged. Tracing it back led to a single concept — a Policy classified as an Invariant — that had silently collapsed a temporal behavior into a spatial constraint. Structurally valid. Semantically backwards. Invisible to every automated check.&lt;/p&gt;
&lt;p&gt;Both of these patterns — the Candidate Lifecycle and the Classification Gap — are things a team would eventually discover too. In production. After the architecture hardens around the wrong assumptions. After the missing reactive path causes a failure that nobody can trace to a modeling decision because the decision was never visible in the first place.&lt;/p&gt;
&lt;p&gt;AI surfaced them during modeling. Not after deployment. Not during code review. During the part of the process where they were still cheap to fix.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;This is the counterintuitive result: AI partnership doesn&apos;t produce simpler architectures. It produces more honest ones.&lt;/p&gt;
&lt;p&gt;Every decision that a team would have deferred, the AI partnership forced. Every ambiguity that would have been papered over with &quot;we&apos;ll figure it out during implementation,&quot; the modeling process surfaced and demanded resolution. Every classification that would have gone unchallenged because the team was moving fast and the model looked clean enough — each one became a decision point with explicit confirmation or rejection.&lt;/p&gt;
&lt;p&gt;The architecture got deeper because every layer exists for a reason that was articulated during design, not discovered during debugging. The provenance chains exist because I realized AI interpretations needed auditable lineage. The confirmation boundaries exist because I watched unverified outputs accumulate. The classification verification exists because I found the invisible error that passes every test.&lt;/p&gt;
&lt;p&gt;None of these layers were planned in advance. None of them came from an architecture diagram drawn before writing code. They emerged from the sustained pressure of building with a partner that has perfect recall, infinite patience, and no incentive to skip the hard parts.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;I want to be precise about what I&apos;m not saying.&lt;/p&gt;
&lt;p&gt;I&apos;m not saying AI is a better collaborator than humans. A domain expert who knows the business deeply, who can say &quot;no, that&apos;s not how we process claims&quot; with the authority of ten years of operational experience — that person is irreplaceable. AI doesn&apos;t have domain expertise. It has pattern recognition and recall.&lt;/p&gt;
&lt;p&gt;I&apos;m not saying teams produce shallow architectures. Teams with strong architects, good facilitation, and genuine domain expert participation produce extraordinary systems. The best architectures I&apos;ve seen were built by teams, not solo practitioners.&lt;/p&gt;
&lt;p&gt;What I&apos;m saying is that the specific mechanics of sustained AI partnership produce architectural depth in places that other processes don&apos;t reach. Let me be concrete about why.&lt;/p&gt;
&lt;p&gt;A human collaborator holds maybe three or four bounded contexts in their head at once. By the fifth, they&apos;re referencing notes from earlier sessions, reconstructing context, losing the fine-grained relationships between aggregates in Context A and the policies they trigger in Context C. That&apos;s not a criticism — it&apos;s a cognitive limit. AI holds the entire specification in working memory. All six bounded contexts, all nine aggregates, all fifty-four domain events, all the invariant relationships between them, simultaneously. When you say &quot;does this new policy in the Order context conflict with the fulfillment saga we defined three weeks ago,&quot; the answer comes back with the specific saga steps, the events it subscribes to, and the exact point where the new policy would create a race condition. No notes. No reconstruction. Immediate structural reasoning across the full model.&lt;/p&gt;
&lt;p&gt;That changes what questions you can ask. You stop simplifying your questions to fit your collaborator&apos;s context window and start asking the hard cross-cutting questions that would take a team twenty minutes of whiteboard reconstruction. The architecture gets deeper because the questions get harder.&lt;/p&gt;
&lt;p&gt;Then there&apos;s adversarial reasoning. I can ask my AI collaborator to argue against its own recommendation. Not as a performance — genuinely. &quot;You just suggested this aggregate boundary. Now tell me why it&apos;s wrong. What would Vernon say? Where does this violate the heuristic thresholds we set in the last pass?&quot; And the challenge comes back with structural reasoning, not ego. No defensiveness. No anchoring to its own prior suggestion. It will tear down something it built ten minutes ago if the reasoning demands it. Try asking a human architect to genuinely attack their own design decision in the same session they made it. The social dynamics make it almost impossible. The AI has no social dynamics. The adversarial review is clean.&lt;/p&gt;
&lt;p&gt;There&apos;s the forcing function of articulation. You cannot hand-wave past a decision with an AI partner. With a team, you can say &quot;this aggregate handles order state&quot; and everyone nods because they roughly know what you mean. With AI, that statement generates follow-up: which state transitions? What commands mutate it? What invariants constrain those transitions? What events are emitted? If you can&apos;t answer, the model has a gap, and the gap is visible immediately — not six weeks later when someone tries to implement it and discovers the aggregate&apos;s responsibilities were never actually defined.&lt;/p&gt;
&lt;p&gt;And there&apos;s the absence of sunk cost. Human teams invest identity in architectural decisions. The person who championed the microservice boundary feels ownership of it. Challenging it three months later isn&apos;t just a technical discussion — it&apos;s a social one. AI has zero investment in prior decisions. When the third pass reveals that a boundary drawn in the first pass was wrong, the restructuring is purely analytical. No politics. No hurt feelings. No &quot;but we already built the infrastructure for this.&quot; The architecture evolves based on what the domain demands, not what the team has already committed to.&lt;/p&gt;
&lt;p&gt;None of this replaces the domain expert. None of it replaces the team that debates trade-offs over lunch and builds shared understanding through years of working together. Those things produce their own kind of depth — the kind that comes from lived operational experience and collective intuition.&lt;/p&gt;
&lt;p&gt;But this combination — one architect, full model context, adversarial reasoning without ego, forced articulation, zero sunk cost — changed how I will architect for the rest of my career. Not because AI is smarter than the teams I&apos;ve worked with. Because it operates in a fundamentally different mode that reaches places those teams structurally couldn&apos;t, for reasons that have nothing to do with intelligence and everything to do with mechanics.&lt;/p&gt;
&lt;p&gt;The complexity wasn&apos;t added. It was always in the domain, waiting for a process that wouldn&apos;t let me look away.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;em&gt;Next: If AI introduces failure modes that structural analysis can&apos;t catch and trust boundaries that traditional DDD never needed, what&apos;s the architectural response? The same one Evans gave us for legacy systems — but applied to intelligence instead of data.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;This is Post 8 of a 17-post series on Signal-Driven Development — a solo-practitioner DDD methodology built with AI.&lt;/em&gt; &lt;a href=&quot;link&quot;&gt;&lt;em&gt;Post 7: The Classification Gap&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;
</content:encoded></item><item><title>The Classification Gap: The Bug That Passes Every Test</title><link>https://blog.coada.dev/the-classification-gap-the-bug-that-passes-every-test/</link><guid isPermaLink="true">https://blog.coada.dev/the-classification-gap-the-bug-that-passes-every-test/</guid><description>There&apos;s a class of domain modeling error that no test catches. No structural analysis flags it. No linter complains. Your gap report comes back clean. Your aggregate has the right number of invariants</description><pubDate>Sun, 29 Mar 2026 22:47:38 GMT</pubDate><content:encoded>&lt;p&gt;There&apos;s a class of domain modeling error that no test catches. No structural analysis flags it. No linter complains. Your gap report comes back clean. Your aggregate has the right number of invariants, your bounded context has healthy ratios, your event flows trace end to end.&lt;/p&gt;
&lt;p&gt;The model is wrong.&lt;/p&gt;
&lt;p&gt;Not wrong like a missing field or a misspelled event name. Wrong like a reactive behavior modeled as a static constraint. Wrong like a Policy collapsed into an Invariant — structurally valid, semantically backwards, invisible to every check you run against it.&lt;/p&gt;
&lt;p&gt;I call this the Classification Gap. And if you&apos;re using AI to assist with domain modeling — even informally, even just bouncing ideas off ChatGPT — you will hit it. You probably already have. You just couldn&apos;t name it.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Here&apos;s the example that made it concrete for me.&lt;/p&gt;
&lt;p&gt;An architect describes a business rule: &lt;em&gt;&quot;When an order is placed, check inventory and reserve stock.&quot;&lt;/em&gt; That&apos;s a reactive behavior. Something happens — an order is placed. In response, the system does something — checks inventory, reserves stock. In DDD vocabulary, this is a Policy. A stateless reaction to a domain event that produces a downstream command. The triggering event and the resulting action are the whole point.&lt;/p&gt;
&lt;p&gt;Now imagine the AI classifies it as an Invariant instead. An Invariant is a static constraint on an aggregate — a rule that must always be true. &quot;An order cannot exceed the customer&apos;s credit limit.&quot; &quot;A shipment must have at least one line item.&quot; These are structural truths about the domain. They don&apos;t react to events. They don&apos;t trigger commands. They just... hold.&lt;/p&gt;
&lt;p&gt;The misclassification looks fine. The Invariant has a name, a description, an owning aggregate. The aggregate now has one more invariant — which actually improves its invariant-to-command ratio. If you&apos;re running structural completeness checks, every box is ticked. If you&apos;re evaluating heuristic thresholds, the numbers look healthier than before.&lt;/p&gt;
&lt;p&gt;The gap report says nothing.&lt;/p&gt;
&lt;p&gt;But you&apos;ve just collapsed a temporal behavior into a spatial constraint. You&apos;ve taken something that fires when an event crosses a boundary and turned it into something that sits inside an aggregate doing nothing. The reactive path — the entire causal chain from &quot;order placed&quot; to &quot;stock reserved&quot; — is gone from the model. Not broken. Not misconfigured. Just absent, because the concept that would have anchored it was placed in the wrong category.&lt;/p&gt;
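&lt;p&gt;The difference is easier to see in code. Here is a minimal sketch, with hypothetical names rather than anything from a real system, of the same rule expressed as a Policy and then misclassified as an Invariant:&lt;/p&gt;

```python
# Minimal sketch, with hypothetical names: the same business rule as a
# Policy (correct) and as a misclassified Invariant (wrong but valid).
from dataclasses import dataclass

@dataclass(frozen=True)
class OrderPlaced:           # domain event: something that happened
    order_id: str
    sku: str
    quantity: int

@dataclass(frozen=True)
class ReserveStock:          # command: the downstream action to take
    sku: str
    quantity: int

def reserve_stock_on_order_placed(event: OrderPlaced) -> ReserveStock:
    # Policy: a stateless reaction. Event in, command out. The causal
    # chain from 'order placed' to 'stock reserved' lives here.
    return ReserveStock(sku=event.sku, quantity=event.quantity)

def stock_is_reserved_invariant(order_state: dict) -> bool:
    # The misclassification: a static predicate that merely 'holds'.
    # Nothing subscribes to OrderPlaced, nothing issues ReserveStock.
    # The reactive path is absent, yet every structural check passes.
    return order_state.get('stock_reserved', False)
```

&lt;p&gt;Both versions are structurally valid; only the first one does anything when an order is placed.&lt;/p&gt;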
&lt;hr /&gt;
&lt;p&gt;In traditional DDD, this mistake is self-correcting. A human modeler who classifies a reactive behavior as a static constraint will feel the friction at implementation time. They&apos;ll try to write the code and something won&apos;t fit. The Invariant doesn&apos;t express what they meant. The code becomes awkward. The feedback loop is the implementation itself — the discomfort of writing code that fights the model is the signal that the model is wrong.&lt;/p&gt;
&lt;p&gt;Evans never had to name this problem because the modeling medium — human conversation, whiteboard sessions, iterative refinement through code — naturally surfaces it. The human modeler carries the semantic intent in their head. When the model diverges from that intent, they feel it. They may not articulate it as &quot;I misclassified a Policy as an Invariant,&quot; but they&apos;ll say &quot;this doesn&apos;t feel right&quot; and restructure.&lt;/p&gt;
&lt;p&gt;AI doesn&apos;t feel anything.&lt;/p&gt;
&lt;p&gt;When an AI classifies a concept into a building block type, it commits with confidence. The classification is linguistically coherent. The output is well-structured. And the moment it enters your domain model, the original semantic intent — the reactive behavior, the temporal causality, the &quot;when X happens, do Y&quot; — is gone. What remains is a structurally valid Invariant with no trace of what it was supposed to be.&lt;/p&gt;
&lt;p&gt;The feedback loop is broken. Not degraded, not delayed — broken. Because every downstream check evaluates the classified output, not the original intent. Structural analysis sees a valid Invariant. Heuristic evaluation sees healthy ratios. The gap report confirms convergence. The system has produced an internally consistent model built on a wrong foundation, and nothing in the verification pipeline can see it.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;This is what makes the Classification Gap different from every other modeling error. Missing elements get caught by structural completeness checks. Threshold violations get caught by heuristic analysis. Naming inconsistencies get caught by convention rules. Every other category of error leaves a visible trace — a gap in the report, a metric outside bounds, a rule that fires.&lt;/p&gt;
&lt;p&gt;The Classification Gap leaves no trace because the model is complete. The error isn&apos;t in what&apos;s missing. It&apos;s in what&apos;s present but miscategorized.&lt;/p&gt;
&lt;p&gt;And the most susceptible boundary — Policy versus Invariant — is also the most architecturally consequential. It&apos;s the boundary between time and space in your domain model. Between something that reacts to events and something that constrains state. Get it wrong and you don&apos;t just have a cosmetic error in your specification. You have a model that will produce an architecture without the reactive paths your domain requires. The event flows won&apos;t be designed because the concepts that anchor them were never modeled as reactive. The sagas won&apos;t be triggered because the policies that initiate them don&apos;t exist.&lt;/p&gt;
&lt;p&gt;The system will work. It will pass tests. And it will be fundamentally wrong about how the domain behaves over time.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;This isn&apos;t just a Policy-versus-Invariant problem, though that&apos;s the sharpest edge. Every classification boundary in DDD carries some version of this risk.&lt;/p&gt;
&lt;p&gt;A Command misclassified as a Domain Event inverts the causal direction — intentions become facts, and the model&apos;s sense of what requests action versus what records completion flips. An Aggregate misclassified as a Domain Service loses its state boundary — the invariant enforcement surface disappears, and the consistency guarantee with it. A Policy over-promoted to a Saga gains compensation logic it doesn&apos;t need, adding architectural complexity for behavior that should be fire-and-forget.&lt;/p&gt;
&lt;p&gt;Each of these is structurally valid. Each passes every check. Each produces a different architecture than the domain actually requires.&lt;/p&gt;
&lt;p&gt;The common thread is that building block type is a semantic decision, not a structural one. It encodes what a concept &lt;em&gt;does&lt;/em&gt; in the domain — how it relates to events, state, time, and causality. Structural analysis can verify that the pieces fit together. It cannot verify that the pieces are the right kind.&lt;/p&gt;
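&lt;p&gt;The Command-versus-Event inversion is the easiest of these to sketch. In this hypothetical example, a command is an intention the system may refuse, while an event is a fact it can only record:&lt;/p&gt;

```python
# Hedged sketch of the Command/Event boundary: a command is an intention
# the system may refuse; an event is a fact it can only record. Names
# are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class PlaceOrder:            # command: imperative, may be rejected
    order_id: str

@dataclass(frozen=True)
class OrderPlaced:           # domain event: past tense, immutable fact
    order_id: str

def handle(command: PlaceOrder, credit_ok: bool) -> OrderPlaced:
    # A handler can refuse a command; no handler can refuse an event.
    # Misclassify PlaceOrder as an event and this decision point, the
    # model's sense of what requests versus what records, disappears.
    if not credit_ok:
        raise ValueError('command rejected: credit limit exceeded')
    return OrderPlaced(order_id=command.order_id)
```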
&lt;hr /&gt;
&lt;p&gt;If you&apos;re using AI for domain modeling today — even as a thinking partner, even just for brainstorming bounded contexts or sketching event flows — the Classification Gap is already in your process. The question isn&apos;t whether it will happen. The question is whether you&apos;ll catch it before the architecture hardens around it.&lt;/p&gt;
&lt;p&gt;Traditional DDD never needed a name for this because the detection mechanism was embedded in the human modeler&apos;s discomfort. AI-mediated DDD needs the name because the discomfort doesn&apos;t exist. The model looks clean. The verification passes. The architecture proceeds.&lt;/p&gt;
&lt;p&gt;And somewhere in your specification, a behavior that should react to events is sitting quietly as a constraint, waiting for someone to notice that the system doesn&apos;t do what the domain requires.&lt;/p&gt;
&lt;p&gt;That&apos;s the Classification Gap. The bug that passes every test.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;em&gt;Next: The architecture that emerged from building detection for problems like this was deeper than anything a team would have produced. Not because AI is smarter — but because it doesn&apos;t let you skip the hard parts.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;This is Post 7 of a 17-post series on Signal-Driven Development — a solo-practitioner DDD methodology built with AI.&lt;/em&gt; &lt;a href=&quot;link&quot;&gt;&lt;em&gt;Post 6: The Candidate Lifecycle&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;
</content:encoded></item><item><title>When Jobs to Be Done Meets Domain-Driven Design</title><link>https://blog.coada.dev/when-jtbd-meets-ddd/</link><guid isPermaLink="true">https://blog.coada.dev/when-jtbd-meets-ddd/</guid><description>There are moments in practice where two ideas you&apos;ve held separately for years suddenly click together — not because someone told you they were connected, but because you were working through a proble</description><pubDate>Sun, 22 Mar 2026 05:31:42 GMT</pubDate><content:encoded>&lt;p&gt;There are moments in practice where two ideas you&apos;ve held separately for years suddenly click together — not because someone told you they were connected, but because you were working through a problem and the structure fell out on its own.&lt;/p&gt;
&lt;p&gt;This happened to me recently while thinking through how user input normalizes into domain concepts. I was working with the Jobs to Be Done framework on one side and Domain-Driven Design building blocks on the other. And the mapping wasn&apos;t just convenient. It was nearly exact.&lt;/p&gt;
&lt;p&gt;What surprised me more: nobody seems to have published this connection.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Two Frameworks, Same Year, Same Problem&lt;/h2&gt;
&lt;p&gt;Clayton Christensen popularized Jobs to Be Done in &lt;em&gt;The Innovator&apos;s Solution&lt;/em&gt; in 2003. The core idea: customers don&apos;t buy products — they hire them to do a job. Understanding the job — the situation that triggers the need, the motivation behind the action, the outcome that signals completion — is the only reliable way to understand what to build.&lt;/p&gt;
&lt;p&gt;Eric Evans published &lt;em&gt;Domain-Driven Design: Tackling Complexity in the Heart of Software&lt;/em&gt; in 2003. Same year. His core idea: software should model the domain — not the database, not the UI, not the org chart. The domain model is the shared language between engineers and domain experts. Get the model right, and the system writes itself. Get the model wrong, and no amount of engineering fixes it.&lt;/p&gt;
&lt;p&gt;Two books. Same year. Completely different audiences — product people reading Christensen, architects reading Evans. Both tackled the same fundamental question: &lt;strong&gt;what should this system do, and why?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;They never cited each other.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Mapping&lt;/h2&gt;
&lt;p&gt;A JTBD statement has a canonical structure:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&quot;When I&lt;/strong&gt; [situation], &lt;strong&gt;I want to&lt;/strong&gt; [motivation], &lt;strong&gt;so I can&lt;/strong&gt; [outcome].&quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That structure isn&apos;t decorative — it&apos;s a decomposition. And each component maps directly to a DDD building block:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;JTBD Component&lt;/th&gt;
&lt;th&gt;What It Captures&lt;/th&gt;
&lt;th&gt;DDD Building Block&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Situation&lt;/strong&gt; (&quot;When I...&quot;)&lt;/td&gt;
&lt;td&gt;The state of the world that triggers the need&lt;/td&gt;
&lt;td&gt;Context / Preconditions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Motivation&lt;/strong&gt; (&quot;I want to...&quot;)&lt;/td&gt;
&lt;td&gt;The action the actor wants the system to perform&lt;/td&gt;
&lt;td&gt;Commands / Intents&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Outcome&lt;/strong&gt; (&quot;So I can...&quot;)&lt;/td&gt;
&lt;td&gt;The observable result that satisfies the job&lt;/td&gt;
&lt;td&gt;Domain Events / Post-conditions&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;p&gt;Read that table again. A product manager who writes &quot;When a compliance violation occurs, I want to be notified immediately, so I can remediate before audit&quot; has already decomposed three things:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A &lt;strong&gt;domain event&lt;/strong&gt; that triggers behavior (violation occurred)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A &lt;strong&gt;command&lt;/strong&gt; that expresses intent (notify me)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A &lt;strong&gt;post-condition&lt;/strong&gt; that defines success (remediation triggered before audit)&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;They don&apos;t know they&apos;ve done it. They&apos;ve never heard the term &quot;aggregate root.&quot; But the decomposition is structurally identical to what an architect would produce in an Event Storming session.&lt;/p&gt;
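&lt;p&gt;The decomposition is mechanical enough to sketch in a few lines. This is an illustration, not a published tool; the function name and its output keys are hypothetical:&lt;/p&gt;

```python
# Illustration only (not a published tool): mechanically splitting a
# canonical JTBD statement into its three DDD-aligned components. The
# function name and output keys are hypothetical.
import re

JTBD = re.compile(r'When (.+?), I want to (.+?), so I can (.+?)\.?$')

def decompose(statement: str) -> dict:
    match = JTBD.match(statement)
    if match is None:
        raise ValueError('not a canonical JTBD statement')
    situation, motivation, outcome = match.groups()
    return {
        'precondition': situation,   # context that triggers the need
        'command': motivation,       # intent the actor expresses
        'event': outcome,            # observable result that satisfies it
    }

job = ('When a compliance violation occurs, I want to be notified '
       'immediately, so I can remediate before audit.')
```

&lt;p&gt;Run the compliance statement through it and the three building blocks fall out with no interpretation layer in between.&lt;/p&gt;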
&lt;hr /&gt;
&lt;h2&gt;What the Mapping Means&lt;/h2&gt;
&lt;p&gt;This isn&apos;t a party trick. The structural alignment has three real consequences.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;First, JTBD input normalizes directly into DDD concepts.&lt;/strong&gt; If you take a well-formed JTBD statement and decompose it — situation into context, motivation into command, outcome into event — you get the raw material for a domain model. No interpretation layer required. The structure does the translation.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Second, DDD gives JTBD structural rigor.&lt;/strong&gt; JTBD practitioners struggle with vague outcomes. &quot;So I can feel confident&quot; is a common pattern — and it&apos;s useless for both product and engineering. DDD&apos;s requirement for observable state changes forces outcomes to be concrete. Not feelings — events. Not aspirations — post-conditions. If the outcome can&apos;t be modeled as something that happened in the system, the JTBD statement is incomplete.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Third, the gap between product and engineering shrinks.&lt;/strong&gt; If JTBD input maps to DDD building blocks, then the translation layer between &quot;what the user needs&quot; and &quot;what the system models&quot; becomes mechanical, not interpretive. The architect doesn&apos;t have to guess what the product manager meant. The product manager doesn&apos;t have to understand aggregates. The decomposition structure itself bridges the vocabulary gap.&lt;/p&gt;
&lt;p&gt;This is the part that excites me most. The perennial friction between product and engineering isn&apos;t a people problem — it&apos;s a structural problem. Product teams and engineering teams decompose the same domain using different vocabularies. The JTBD-to-DDD mapping reveals that the vocabularies are isomorphic. They&apos;re describing the same structure with different words.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Event Storming Parallel&lt;/h2&gt;
&lt;p&gt;The mapping gets more interesting when you bring in Alberto Brandolini&apos;s Event Storming.&lt;/p&gt;
&lt;p&gt;Event Storming uses a specific flow: a &lt;strong&gt;domain event&lt;/strong&gt; (orange sticky) triggers a &lt;strong&gt;policy&lt;/strong&gt; or &lt;strong&gt;reaction&lt;/strong&gt;, which issues a &lt;strong&gt;command&lt;/strong&gt; (blue sticky), which produces a new &lt;strong&gt;domain event&lt;/strong&gt; (orange sticky). The canonical reactive loop:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Event → Policy → Command → Event&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Now look at the JTBD structure again:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Situation (something happened) → Motivation (I want to do something) → Outcome (something new happens)&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;And the DDD mapping:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Precondition (system state) → Command (intent) → Domain Event (state change)&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Three independent frameworks. Three different disciplines — product management, domain modeling, collaborative workshop facilitation. Developed across a decade (2003, 2003, ~2013). And the decomposition shape is the same:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Trigger → Action → Observable Result.&lt;/strong&gt;&lt;/p&gt;
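&lt;p&gt;One turn of that loop can be sketched directly. The types here are invented examples; only the shape matters:&lt;/p&gt;

```python
# Illustrative sketch of one turn of the reactive loop. The types are
# invented examples; only the shape (event, policy, command, event) is
# the point.
from dataclasses import dataclass

@dataclass(frozen=True)
class OrderPlaced:           # orange sticky: a fact
    order_id: str

@dataclass(frozen=True)
class ReserveStock:          # blue sticky: an intent
    order_id: str

@dataclass(frozen=True)
class StockReserved:         # orange sticky: the next fact
    order_id: str

def reservation_policy(event: OrderPlaced) -> ReserveStock:
    # Policy: stateless reaction, event in, command out
    return ReserveStock(order_id=event.order_id)

def handle_reserve_stock(command: ReserveStock) -> StockReserved:
    # Handler: performs the reservation, emits the next event
    return StockReserved(order_id=command.order_id)

def one_turn(event: OrderPlaced) -> StockReserved:
    # Event -> Policy -> Command -> Event
    return handle_reserve_stock(reservation_policy(event))
```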
&lt;hr /&gt;
&lt;h2&gt;The Shape of the Problem&lt;/h2&gt;
&lt;p&gt;Here&apos;s the thesis I keep coming back to.&lt;/p&gt;
&lt;p&gt;When independent frameworks converge on the same structure without ever referencing each other, the structure isn&apos;t a framework artifact — it&apos;s a property of the problem domain itself.&lt;/p&gt;
&lt;p&gt;Christensen wasn&apos;t thinking about domain events when he formalized the Situation → Motivation → Outcome structure. Evans wasn&apos;t thinking about jobs to be done when he modeled Precondition → Command → Event. Brandolini wasn&apos;t thinking about either when he laid out Event → Policy → Command → Event on a wall of sticky notes.&lt;/p&gt;
&lt;p&gt;They were all trying to answer the same question: &lt;em&gt;how do you rigorously decompose what a system should do and why?&lt;/em&gt; And they all arrived at the same shape.&lt;/p&gt;
&lt;p&gt;That shape — trigger, action, observable result — isn&apos;t Christensen&apos;s invention, or Evans&apos;s, or Brandolini&apos;s. It&apos;s the shape of the problem. Every rigorous decomposition method finds it, because it&apos;s the only shape that captures causality, intent, and outcome in a single structure.&lt;/p&gt;
&lt;p&gt;I&apos;ve seen this convergence pattern before. When you model a domain with temporal scope — events crossing bounded contexts over time — Event Storming&apos;s swim-lane vocabulary emerges naturally as a byproduct. You don&apos;t set out to build Event Storming. You include temporal scope in the design, and the reactive flow (event → policy → command → event) organizes itself into the same vocabulary Brandolini formalized. You don&apos;t find Event Storming. Event Storming finds you — the moment you model events crossing boundaries over time.&lt;/p&gt;
&lt;p&gt;The JTBD mapping is the same phenomenon one layer up. You don&apos;t set out to connect JTBD to DDD. You try to normalize user input into domain concepts, and the decomposition maps itself.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;What This Means for Practice&lt;/h2&gt;
&lt;p&gt;If you&apos;re a product manager writing JTBD statements, you&apos;re already doing domain decomposition. You just don&apos;t have the vocabulary to name what you&apos;ve produced. Learn enough DDD to recognize that your &quot;outcomes&quot; are domain events. It will make your JTBD statements sharper and your conversations with engineering more productive.&lt;/p&gt;
&lt;p&gt;If you&apos;re an architect practicing DDD, look at the JTBD statements your product team is writing. They&apos;re not requirements — they&apos;re decompositions. The situation is your precondition. The motivation is your command. The outcome is your event. You may find that the product team has already done half your Event Storming before the session starts.&lt;/p&gt;
&lt;p&gt;If you&apos;re a solo builder wearing both hats, this mapping is a gift. Write your own JTBD statements for the domain you&apos;re modeling. Then decompose them. Situation into context. Motivation into command. Outcome into event. You&apos;ll have the skeleton of a domain model before you draw a single diagram.&lt;/p&gt;
&lt;p&gt;And if you&apos;re skeptical — try it. Take any well-formed JTBD statement and apply the mapping. See if the DDD building blocks fall out. In every case I&apos;ve tested, they do. Not because the mapping is clever, but because the problem has a shape, and both frameworks found it.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;em&gt;Christensen and Evans published in the same year. They decomposed the same problem. They never cited each other. And the mapping is nearly exact.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Every JTBD statement is a domain event waiting to be named.&lt;/em&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;strong&gt;Further reading:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Clayton Christensen &amp;amp; Michael Raynor, &lt;em&gt;The Innovator&apos;s Solution&lt;/em&gt; (2003)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Eric Evans, &lt;em&gt;Domain-Driven Design: Tackling Complexity in the Heart of Software&lt;/em&gt; (2003)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Alberto Brandolini, &lt;em&gt;Introducing EventStorming&lt;/em&gt; (~2013; &lt;a href=&quot;http://eventstorming.com&quot;&gt;eventstorming.com&lt;/a&gt;)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href=&quot;http://narrativedriven.org&quot;&gt;narrativedriven.org&lt;/a&gt; — Narrative-Driven Development, temporal modeling, and the reactive path vocabulary&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
</content:encoded></item><item><title>The Candidate Lifecycle: When AI Models Your Domain, Who Confirms It?</title><link>https://blog.coada.dev/the-candidate-lifecycle-when-ai-models-your-domain-who-confirms-it/</link><guid isPermaLink="true">https://blog.coada.dev/the-candidate-lifecycle-when-ai-models-your-domain-who-confirms-it/</guid><description>This post marks a shift.
Posts 1 through 5 gave away a methodology — Signal-Driven Development, the gap report, the three-pass convergence process. That was the community gift. Use it, fork it, adapt </description><pubDate>Sun, 22 Mar 2026 05:17:18 GMT</pubDate><content:encoded>&lt;p&gt;This post marks a shift.&lt;/p&gt;
&lt;p&gt;Posts 1 through 5 gave away a methodology — Signal-Driven Development, the gap report, the three-pass convergence process. That was the community gift. Use it, fork it, adapt it.&lt;/p&gt;
&lt;p&gt;What follows is different. Over the next four posts, I&apos;m formalizing patterns that emerged from practicing SDD rigorously with an AI collaborator across production domains. These aren&apos;t theoretical observations. They&apos;re problems I hit, named, and solved — problems that every team incorporating AI into domain modeling will encounter.&lt;/p&gt;
&lt;p&gt;Nobody in the DDD community is publishing on these patterns, because nobody else has built a system that does AI-mediated domain modeling at this depth. That&apos;s not a boast. It&apos;s an observation about where the field is. &lt;a href=&quot;https://www.domainlanguage.com/articles/ai-components-deterministic-system/&quot;&gt;Evans&lt;/a&gt; is writing about integrating AI components into deterministic systems. &lt;a href=&quot;https://ddd.academy/accelerate-your-strategic-design-with-llms/&quot;&gt;DDD Europe 2026&lt;/a&gt; has workshops on LLM-assisted strategic design. The community is exploring the intersection. I&apos;ve been living in it for months.&lt;/p&gt;
&lt;p&gt;The first pattern is the Candidate Lifecycle. It answers a question that traditional DDD never needed to ask.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Implicit Consensus Problem&lt;/h2&gt;
&lt;p&gt;In traditional DDD, knowledge crunching is collaborative. A group of people — developers, domain experts, architects — sit in a room, put stickies on a wall, argue about language, and iterate toward a model. The model that survives the room is the model the team has agreed on.&lt;/p&gt;
&lt;p&gt;The agreement is implicit. Nobody votes on whether &lt;code&gt;OrderPlaced&lt;/code&gt; is the right event name. Nobody signs off on the aggregate boundary. The team converged through conversation, and the model reflects that convergence. When the developer writes code that mirrors the model, they&apos;re implementing a shared understanding — imperfect, informal, but collectively owned.&lt;/p&gt;
&lt;p&gt;This works because the feedback loop is human. When a domain expert says &quot;that&apos;s not how we think about it,&quot; the model changes. When a developer says &quot;I can&apos;t implement this boundary cleanly,&quot; the team revisits. The model is continuously validated by the people who created it.&lt;/p&gt;
&lt;p&gt;Now replace the room with an AI.&lt;/p&gt;
&lt;p&gt;You feed a product requirements document into a language model. The model produces a domain specification — bounded contexts, aggregates, events, commands, invariants. The specification is structurally plausible. The names sound right. The boundaries look reasonable. The event flows make sense.&lt;/p&gt;
&lt;p&gt;But who agreed to this model?&lt;/p&gt;
&lt;p&gt;The AI doesn&apos;t have domain expertise. It has statistical pattern completion trained on millions of documents that include DDD examples, software architecture discussions, and domain modeling content. When it names an aggregate &lt;code&gt;OrderFulfillment&lt;/code&gt; and places it in a &lt;code&gt;Logistics&lt;/code&gt; bounded context, it&apos;s not making a domain judgment. It&apos;s producing a statistically likely output given the input and its training distribution.&lt;/p&gt;
&lt;p&gt;The model might be right. It might be excellent. But the mechanism that produced it is fundamentally different from the mechanism that produces a model in a collaborative workshop. There was no implicit consensus. There was no domain expert pushback. There was no developer saying &quot;this boundary feels wrong.&quot; There was a prompt, a completion, and a plausible-looking output.&lt;/p&gt;
&lt;p&gt;This is the implicit consensus problem: &lt;strong&gt;traditional DDD&apos;s trust model breaks when AI generates the domain model.&lt;/strong&gt; The trust was always embedded in the process — we trust the model because we built it together. When AI builds it, that embedded trust doesn&apos;t exist.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Candidate Lifecycle&lt;/h2&gt;
&lt;p&gt;The Candidate Lifecycle is a design pattern for AI-mediated domain modeling. It establishes an explicit trust boundary between AI-generated output and confirmed domain knowledge.&lt;/p&gt;
&lt;p&gt;The core principle: &lt;strong&gt;nothing an AI produces is domain knowledge until an architect explicitly confirms it.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Every domain artifact that an AI generates — every aggregate, every event, every bounded context boundary, every invariant — enters the system as a &lt;em&gt;candidate&lt;/em&gt;. A candidate is a proposal with provenance. It carries what was proposed, why it was proposed (the AI&apos;s reasoning), what alternatives were considered, and what produced it (which model, which prompt strategy, which input).&lt;/p&gt;
&lt;p&gt;A candidate is not part of the domain model. It&apos;s a proposal &lt;em&gt;to&lt;/em&gt; the domain model. The domain model only changes when the architect does one of three things:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Confirm&lt;/strong&gt;: The candidate is correct. The AI&apos;s classification matches the architect&apos;s domain understanding. The aggregate should exist, the boundary is right, the event name is appropriate. The candidate becomes confirmed domain knowledge.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Override&lt;/strong&gt;: The candidate is partially correct but misclassified. The AI identified a real domain concept but categorized it wrong — it proposed an invariant where the architect recognizes a policy, or it placed a service in the wrong bounded context. The architect corrects the classification while preserving the underlying insight. The override is recorded with rationale.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reject&lt;/strong&gt;: The candidate is wrong. The AI hallucinated a concept, misinterpreted the input, or produced something that contradicts the architect&apos;s domain understanding. The rejection is recorded with rationale — because the rejection is itself domain knowledge. It documents what the domain is &lt;em&gt;not&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;The confirmation, override, and rejection are the new design surface. In traditional DDD, the design surface is the whiteboard — where you draw boundaries and name things. In AI-mediated DDD, the design surface is the confirmation boundary — where you decide which AI proposals become trusted domain knowledge and which don&apos;t.&lt;/p&gt;
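&lt;p&gt;A minimal sketch of the lifecycle, with hypothetical field names rather than a published schema:&lt;/p&gt;

```python
# Hypothetical sketch of the lifecycle: AI output enters as a candidate
# and becomes domain knowledge only through an explicit decision. The
# field names are illustrative, not a published schema.
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    PROPOSED = 'proposed'
    CONFIRMED = 'confirmed'
    OVERRIDDEN = 'overridden'
    REJECTED = 'rejected'

@dataclass
class Candidate:
    name: str            # e.g. 'ReserveStockOnOrderPlaced'
    block_type: str      # the AI's classification, e.g. 'invariant'
    provenance: dict     # model version, prompt strategy, input, reasoning
    status: Status = Status.PROPOSED
    rationale: str = ''

    def confirm(self, architect: str) -> None:
        self.status = Status.CONFIRMED
        self.rationale = 'confirmed by ' + architect

    def override(self, correct_type: str, rationale: str) -> None:
        # Keep the underlying insight, correct the classification
        self.block_type = correct_type
        self.status = Status.OVERRIDDEN
        self.rationale = rationale

    def reject(self, rationale: str) -> None:
        # Recorded, not discarded: it documents what the domain is not
        self.status = Status.REJECTED
        self.rationale = rationale
```

&lt;p&gt;The three methods are the design surface: every state change is an architect decision with a recorded rationale.&lt;/p&gt;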
&lt;hr /&gt;
&lt;h2&gt;Why Provenance Matters&lt;/h2&gt;
&lt;p&gt;Every candidate must carry provenance — metadata about what produced it and why. This isn&apos;t an implementation detail. It&apos;s a domain requirement.&lt;/p&gt;
&lt;p&gt;Provenance answers the question that every architect will eventually ask: &quot;Why does this aggregate exist?&quot; In traditional DDD, the answer is &quot;we decided in the March workshop.&quot; In AI-mediated DDD, the answer must be traceable: this aggregate was proposed by this model version, using this prompt strategy, in response to this input, with this reasoning, and it was confirmed by this architect on this date.&lt;/p&gt;
&lt;p&gt;Provenance serves three purposes.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Auditability&lt;/strong&gt;: When a domain model is the foundation for a production system, someone will eventually need to reconstruct why a particular decision was made. Provenance provides the full chain — from the input document, through the AI&apos;s interpretation, to the candidate proposal, through the architect&apos;s confirmation. Every link in the chain is recorded.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reproducibility&lt;/strong&gt;: When you update your AI model or change your prompt strategy, you need to understand what changed. Provenance lets you ask: &quot;Which candidates in this specification were produced by the previous model version? Have any of them been invalidated by the updated model&apos;s output?&quot; Without provenance, model upgrades are blind — you can&apos;t tell which parts of your domain model might be affected.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Trust calibration&lt;/strong&gt;: Over time, provenance data reveals patterns about AI reliability. Which types of domain concepts does the model classify well? Where does it consistently struggle? Provenance transforms individual confirmation decisions into aggregate insight about the AI&apos;s modeling capability. This is how the trust boundary becomes data-driven rather than faith-based.&lt;/p&gt;
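&lt;p&gt;A provenance record can be small. This is a hypothetical sketch of the fields described above, plus the reproducibility query it enables (the field names and model identifiers are illustrative):&lt;/p&gt;

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Provenance:
    """Metadata carried by every candidate: what produced it, and why."""
    model_version: str    # e.g. "model-a" (illustrative identifier)
    prompt_strategy: str  # e.g. "prompt-v2"
    source_input: str     # the document the AI interpreted
    reasoning: str        # the AI's stated rationale for the proposal
    confirmed_by: str     # architect who made the decision
    confirmed_on: date

def affected_by_model_change(provenances, old_version):
    """Reproducibility query: which decisions rest on the previous model?"""
    return [p for p in provenances if p.model_version == old_version]

p_old = Provenance("model-a", "prompt-v1", "workshop-notes.md",
                   "Booking flow implies a scheduling boundary.",
                   "alice", date(2026, 3, 1))
p_new = Provenance("model-b", "prompt-v2", "workshop-notes.md",
                   "Billing flow implies an invoicing boundary.",
                   "alice", date(2026, 4, 1))
stale = affected_by_model_change([p_old, p_new], "model-a")
```

&lt;p&gt;After a model upgrade, the query answers the question the text poses: which parts of the confirmed specification might be invalidated by the new model&apos;s output.&lt;/p&gt;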
&lt;hr /&gt;
&lt;h2&gt;The Evans Connection&lt;/h2&gt;
&lt;p&gt;At &lt;a href=&quot;https://www.infoq.com/news/2024/03/Evans-ddd-experiment-llm/&quot;&gt;Explore DDD 2024&lt;/a&gt;, Evans framed an LLM trained on a ubiquitous language as effectively a bounded context. It has its own model of the domain, shaped by its training data and fine-tuning. It speaks a language that overlaps with but isn&apos;t identical to the domain expert&apos;s language.&lt;/p&gt;
&lt;p&gt;This is a powerful framing. And the Candidate Lifecycle is the answer to the question it raises: &lt;strong&gt;how does knowledge from the AI&apos;s bounded context become trusted domain knowledge in the architect&apos;s model?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In traditional DDD context mapping, we have patterns for this. When two bounded contexts need to share knowledge, we use patterns like Published Language, Anti-Corruption Layer, Customer-Supplier, or Conformist. Each pattern defines who owns the translation, who controls the contract, and how mismatches are handled.&lt;/p&gt;
&lt;p&gt;The AI&apos;s &quot;bounded context&quot; needs the same treatment. The AI produces output in its own model. That output must cross a trust boundary before it enters the architect&apos;s domain model. The Candidate Lifecycle is the translation mechanism — it&apos;s the Anti-Corruption Layer between the AI&apos;s statistical model and the architect&apos;s domain model.&lt;/p&gt;
&lt;p&gt;Evans is now &lt;a href=&quot;https://www.domainlanguage.com/articles/context-mapping-an-ai-based-component/&quot;&gt;writing explicitly about this pattern&lt;/a&gt; — drawing Anti-Corruption Layers between deterministic application code and probabilistic LLM output. We arrived at the same architectural conclusion independently. The AI&apos;s output must be constrained, translated, and explicitly accepted before it enters the deterministic system. The Candidate Lifecycle formalizes the acceptance mechanism for domain modeling specifically.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;What This Changes About DDD Practice&lt;/h2&gt;
&lt;p&gt;The Candidate Lifecycle has implications that go beyond the obvious.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The architect&apos;s role changes.&lt;/strong&gt; In traditional DDD, the architect is a creator — they model the domain through collaborative discovery. In AI-mediated DDD, the architect becomes a curator and a judge. The AI generates candidates at a speed and volume that no human modeler could match. The architect&apos;s job is to evaluate, confirm, override, reject, and document. The creative act shifts from &quot;invent the model&quot; to &quot;validate the model and improve it.&quot;&lt;/p&gt;
&lt;p&gt;This isn&apos;t a lesser role. It&apos;s a more rigorous one. The architect who evaluates fifty AI-proposed candidates and confirms thirty, overrides twelve, and rejects eight has made fifty explicit domain decisions — each documented with rationale. The architect who draws a model on a whiteboard has made the same decisions implicitly, with no record of what was considered and rejected.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Rejection becomes a first-class artifact.&lt;/strong&gt; In traditional DDD, rejected ideas are lost — they exist only in the memory of the people who were in the room. In the Candidate Lifecycle, every rejection is recorded with rationale. &quot;This was proposed as an aggregate, but it has no invariants and no independent lifecycle — it&apos;s a value object&quot; is domain knowledge. It documents what the domain is not, which constrains future modeling decisions and prevents the same mistake from being proposed again.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The speed of iteration changes.&lt;/strong&gt; A three-pass convergence that might take weeks with a human team can happen in hours with AI generating candidates and an architect curating them. But the curation can&apos;t be automated — that&apos;s the whole point. The AI proposes, the architect decides. The speed gain is in generation, not in judgment.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Provenance creates institutional memory.&lt;/strong&gt; When the architect who confirmed a set of candidates leaves the team, the provenance chain remains. The next architect can reconstruct not just what was decided, but why — including the AI&apos;s reasoning, the alternatives that were considered, and the rationale for each confirmation and rejection. This is better institutional memory than most teams have ever had for their domain models, because it was captured at decision time rather than reconstructed after the fact.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Pattern in Practice&lt;/h2&gt;
&lt;p&gt;If you&apos;re incorporating AI into your domain modeling process today — whether through ChatGPT, Claude, a fine-tuned model, or a purpose-built system — here&apos;s how to apply the Candidate Lifecycle manually:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Mark everything the AI produces as provisional.&lt;/strong&gt; Don&apos;t copy AI-generated domain concepts directly into your specification. Create a separate &quot;candidates&quot; section. Each candidate gets an ID, the AI&apos;s proposed classification (aggregate, event, policy, etc.), the AI&apos;s reasoning if available, and a status: pending, confirmed, overridden, or rejected.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Review candidates in bounded context order.&lt;/strong&gt; Start with context boundaries. Then aggregates within each context. Then events and commands within each aggregate. Decisions cascade asymmetrically — confirming a bounded context doesn&apos;t confirm its aggregates, but rejecting a bounded context rejects everything inside it.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Document every override and rejection.&lt;/strong&gt; The override rationale (&quot;this was proposed as an invariant but it&apos;s actually a policy — it reacts to events rather than constraining state&quot;) is more valuable than the confirmation rationale. Overrides and rejections are where your domain understanding diverges from the AI&apos;s pattern matching. They&apos;re the signal.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Track which model version produced which candidates.&lt;/strong&gt; When you update your AI model or change your prompting approach, you need to know which parts of your domain specification were produced under the previous configuration. Provenance doesn&apos;t need to be sophisticated — &quot;GPT-4o, March 2026, prompt v2&quot; is sufficient for manual tracking.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Run the gap report after confirmation.&lt;/strong&gt; The gap report (&lt;a href=&quot;https://listenrightmeow.hashnode.dev/the-gap-report-ddds-missing-feedback-loop&quot;&gt;Post 5&lt;/a&gt;) evaluates the confirmed specification, not the raw AI output. Gaps found post-confirmation are real gaps in the architect&apos;s curated model — not noise from unreviewed AI proposals.&lt;/p&gt;
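&lt;p&gt;The manual process above needs nothing more than a flat ledger. A minimal sketch, assuming candidates are keyed by a hierarchical path (context / aggregate / element) — the paths and statuses here are illustrative:&lt;/p&gt;

```python
# Each candidate's status: pending, confirmed, overridden, or rejected.
candidates = {
    "scheduling": "pending",
    "scheduling/appointment": "pending",
    "scheduling/appointment/BookAppointment": "pending",
    "inventory": "pending",
}

def reject(path: str) -> None:
    # Rejection cascades: everything beneath the rejected node goes with it.
    for key in candidates:
        if key == path or key.startswith(path + "/"):
            candidates[key] = "rejected"

def confirm(path: str) -> None:
    # Confirmation does NOT cascade: children stay pending for individual review.
    candidates[path] = "confirmed"

confirm("scheduling")
reject("inventory")
```

&lt;p&gt;A spreadsheet with a path column and a status column implements the same ledger; the asymmetry between confirm and reject is the only rule worth automating.&lt;/p&gt;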
&lt;hr /&gt;
&lt;h2&gt;What Comes Next&lt;/h2&gt;
&lt;p&gt;The Candidate Lifecycle establishes the trust boundary. But it assumes the candidates are at least structurally valid — that an aggregate is an aggregate, that a policy is a policy, that the AI&apos;s classification is correct even if the domain judgment is wrong.&lt;/p&gt;
&lt;p&gt;What happens when the classification itself is wrong? When the AI proposes something that passes every structural check, looks correct in every gap report, and produces a domain model that appears complete — but the behavioral semantics are fundamentally broken?&lt;/p&gt;
&lt;p&gt;That&apos;s the Classification Gap. Post 7.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;em&gt;This is Post 6 of a series on DDD, AI, and the methodology that emerged from practicing both rigorously.&lt;/em&gt; &lt;a href=&quot;https://listenrightmeow.hashnode.dev/the-gap-report-ddds-missing-feedback-loop&quot;&gt;&lt;em&gt;Post 5&lt;/em&gt;&lt;/a&gt; &lt;em&gt;delivered the gap report deep dive and the&lt;/em&gt; &lt;a href=&quot;https://github.com/listenrightmeow/signal-driven-development&quot;&gt;&lt;em&gt;SDD repository&lt;/em&gt;&lt;/a&gt;&lt;em&gt;. The series continues with the Classification Gap in Post 7.&lt;/em&gt;&lt;/p&gt;
</content:encoded></item><item><title>The Gap Report: DDD&apos;s Missing Feedback Loop</title><link>https://blog.coada.dev/the-gap-report-ddds-missing-feedback-loop/</link><guid isPermaLink="true">https://blog.coada.dev/the-gap-report-ddds-missing-feedback-loop/</guid><description>In Post 4, I introduced Signal-Driven Development and its core claim: DDD has never had a definition of done. SDD provides one — zero unresolved gaps across a structured convergence process.
But I lef</description><pubDate>Sun, 22 Mar 2026 05:03:38 GMT</pubDate><content:encoded>&lt;p&gt;In &lt;a href=&quot;https://listenrightmeow.hashnode.dev/introducing-signal-driven-development&quot;&gt;Post 4&lt;/a&gt;, I introduced Signal-Driven Development and its core claim: DDD has never had a definition of done. SDD provides one — zero unresolved gaps across a structured convergence process.&lt;/p&gt;
&lt;p&gt;But I left the gap report itself as a concept. This post makes it concrete. What does a gap report actually look like? What does it measure? How does the three-pass convergence trajectory work when you&apos;re sitting in front of a real domain specification?&lt;/p&gt;
&lt;p&gt;I&apos;m also releasing the &lt;a href=&quot;https://github.com/listenrightmeow/signal-driven-development&quot;&gt;SDD repository&lt;/a&gt; — templates for gap reports, resolution logs, domain specifications, and architecture palettes, plus a complete worked example showing three-pass convergence on a fictional domain. Grab the templates and run a pass on your own system. That&apos;s not a suggestion — it&apos;s the fastest way to understand whether SDD solves a problem you have.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Anatomy of a Gap Report&lt;/h2&gt;
&lt;p&gt;A gap report evaluates a domain specification against four categories. Each gap is a question the specification hasn&apos;t answered.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Structural Gaps (SG)&lt;/strong&gt; are missing or malformed elements. These are binary — the element exists or it doesn&apos;t. An aggregate without invariants. A command that doesn&apos;t produce a domain event. A bounded context with no declared relationships to other contexts.&lt;/p&gt;
&lt;p&gt;Structural gaps are the easiest to identify and the most dangerous to ignore. An aggregate without invariants is a consistency boundary that enforces nothing — it&apos;s a data structure with a misleading name. &lt;a href=&quot;https://www.amazon.com/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215&quot;&gt;Evans&lt;/a&gt; is explicit about this: the aggregate exists to protect invariants. If there are no invariants, there is no aggregate.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Heuristic Gaps (HG)&lt;/strong&gt; are patterns that violate established DDD principles. Unlike structural gaps, these aren&apos;t binary — they&apos;re threshold-based. &lt;a href=&quot;https://www.amazon.com/Implementing-Domain-Driven-Design-Vaughn-Vernon/dp/0321834577&quot;&gt;Vernon&apos;s&lt;/a&gt; small aggregate heuristic suggests no more than six commands per aggregate. &lt;a href=&quot;https://www.domainlanguage.com/ddd/&quot;&gt;Evans&apos;s&lt;/a&gt; bounded context principles suggest no more than three shared terms across contexts before you question whether the boundary is real. Saga step counts beyond five suggest decomposition.&lt;/p&gt;
&lt;p&gt;Every heuristic has a measurable default grounded in published DDD literature. Every default is overridable — because every domain has legitimate reasons to deviate. The gap report doesn&apos;t penalize deviation. It forces you to acknowledge and document it. The difference between an architect who exceeds a heuristic intentionally and one who exceeds it accidentally is the documentation of the decision.&lt;/p&gt;
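&lt;p&gt;A heuristic check with overridable defaults is a few lines. This sketch uses the thresholds named above (six commands per aggregate, three shared terms, five saga steps); the function and structure are illustrative, and the key property is that an override must carry a rationale:&lt;/p&gt;

```python
DEFAULTS = {
    "max_commands_per_aggregate": 6,  # Vernon's small-aggregate heuristic
    "max_shared_terms": 3,            # bounded-context term overlap
    "max_saga_steps": 5,              # decomposition threshold
}

def check_heuristic(name, measured, overrides=None):
    """Return a gap dict if the threshold is breached, else None.

    Overrides are (limit, rationale) pairs: deviation is not penalized,
    but it must be acknowledged and documented.
    """
    overrides = overrides or {}
    if name in overrides:
        limit, rationale = overrides[name]
    else:
        limit, rationale = DEFAULTS[name], None
    if measured > limit:
        return {"heuristic": name, "measured": measured, "limit": limit}
    return None

# Nine commands on one aggregate breaches the default of six:
gap = check_heuristic("max_commands_per_aggregate", 9)
```

&lt;p&gt;With a documented override of, say, ten commands, the same measurement produces no gap — the deviation has become an explicit decision rather than an accident.&lt;/p&gt;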
&lt;p&gt;&lt;strong&gt;Language Gaps (LG)&lt;/strong&gt; are ambiguities in the ubiquitous language. The same term used with different meanings across contexts without explicit declaration. Unnamed concepts referenced in multiple places. Overloaded terms where a single word carries two distinct domain meanings.&lt;/p&gt;
&lt;p&gt;Language gaps are subtle and consequential. When &quot;treatment&quot; means both &quot;the full clinical encounter&quot; and &quot;a single medical intervention,&quot; every conversation about treatments becomes ambiguous. The code will resolve the ambiguity — but it will resolve it silently, and different developers will resolve it differently.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Decision Gaps (DG)&lt;/strong&gt; are architectural choices that haven&apos;t been made or haven&apos;t been documented. A bounded context boundary that could reasonably be drawn in two places. A relationship type that&apos;s assumed but not declared. A scope decision that&apos;s implicit rather than explicit.&lt;/p&gt;
&lt;p&gt;Decision gaps are the gaps that define your architecture. The structural gaps and heuristic violations are usually mechanical to fix. The decision gaps require judgment, tradeoff reasoning, and the willingness to commit to a position and document why.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;What a Gap Looks Like&lt;/h2&gt;
&lt;p&gt;Here&apos;s a structural gap from a real worked example — a veterinary clinic domain specification:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;SG-01: Appointment aggregate has zero invariants&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Severity&lt;/strong&gt;: Error&lt;br /&gt;&lt;strong&gt;Rule&lt;/strong&gt;: Aggregates must protect at least one invariant — an aggregate without invariants has no consistency boundary to enforce.&lt;br /&gt;&lt;strong&gt;Specification element&lt;/strong&gt;: Appointment aggregate in Scheduling context&lt;br /&gt;&lt;strong&gt;Analysis&lt;/strong&gt;: Appointment defines four commands and four events but no invariants. What prevents double-booking the same time slot? What prevents checking in a cancelled appointment? What prevents rescheduling to a time in the past? Without invariants, the Appointment aggregate is a data container, not a consistency boundary.&lt;br /&gt;&lt;strong&gt;Recommendation&lt;/strong&gt;: Define invariants. At minimum: (1) No two appointments for the same veterinarian may overlap in time. (2) Appointment status must follow a valid lifecycle. (3) Rescheduled time must be in the future.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Notice the structure. The gap states what was measured (&lt;strong&gt;zero invariants&lt;/strong&gt;), why it matters (&lt;strong&gt;no consistency boundary&lt;/strong&gt;), what the consequences are (&lt;strong&gt;double-booking, invalid transitions&lt;/strong&gt;), and what to do about it (&lt;strong&gt;define specific invariants&lt;/strong&gt;). It&apos;s not a vague warning. It&apos;s an actionable diagnostic that tells you exactly where to look and what question to answer.&lt;/p&gt;
&lt;p&gt;Here&apos;s a heuristic gap from the same domain:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;HG-04: Zero sagas in a domain with multi-step processes&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Severity&lt;/strong&gt;: Warning&lt;br /&gt;&lt;strong&gt;Rule&lt;/strong&gt;: Domains with cross-aggregate, multi-step processes typically require at least one saga.&lt;br /&gt;&lt;strong&gt;Metric&lt;/strong&gt;: 0 sagas, 3 policies&lt;br /&gt;&lt;strong&gt;Analysis&lt;/strong&gt;: The full visit lifecycle spans multiple aggregates across multiple contexts: Appointment → Visit → Treatment → Invoice. This is a multi-step process with potential failure points. What if treatment is started but the visit is never closed? What if the visit is closed but invoice generation fails? Policies handle the happy path. There&apos;s no compensation or failure handling.&lt;br /&gt;&lt;strong&gt;Recommendation&lt;/strong&gt;: Evaluate the visit lifecycle as a saga candidate with compensation for each step.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;And here&apos;s a decision gap:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;DG-02: How does pricing work?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Severity&lt;/strong&gt;: Error&lt;br /&gt;&lt;strong&gt;Analysis&lt;/strong&gt;: PricingService &quot;calculates line item prices based on treatment codes and clinic pricing rules.&quot; But there&apos;s no pricing model, no price list aggregate, no pricing configuration. Where do prices come from? Are they per-treatment-code? Per-veterinarian? Time-based? The service exists but its data model is undefined.&lt;br /&gt;&lt;strong&gt;Recommendation&lt;/strong&gt;: Define a PriceList aggregate with pricing rules. Determine whether pricing is static or dynamic.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That decision gap is an error, not a warning, because it describes a domain concept that other parts of the specification depend on but that doesn&apos;t exist. Invoice generation references pricing. Pricing references nothing. There&apos;s a hole in the model where a concept should be.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Three-Pass Trajectory&lt;/h2&gt;
&lt;p&gt;The gap report becomes powerful across passes. Here&apos;s what convergence actually looks like, from the veterinary clinic worked example:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Pass 1&lt;/strong&gt;: 18 gaps identified. 5 errors, 13 warnings. The domain specification has all six aggregates named and placed. It has events flowing between contexts. It looks like a domain model. But three of six aggregates have zero invariants. The Treatment aggregate is a sequential pipeline pretending to be an aggregate. There&apos;s no saga handling a multi-step visit lifecycle. Billing has no pricing model. The veterinarian schedule doesn&apos;t exist as a domain concept.&lt;/p&gt;
&lt;p&gt;Every gap gets resolved. The resolution log documents what was decided and why. 16 accepted as recommended, 2 accepted with modification, 0 rejected. The specification grows: invariants go from 5 to 12, a saga is introduced, two new aggregates are added, a language overload is fixed.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Pass 2&lt;/strong&gt;: 5 gaps. Zero errors, 5 warnings. The structural problems are gone. What remains are refinement concerns — the saga needs a timeout, the walk-in path has an event ordering dependency, a pricing snapshot rule needs to be explicit. These are the decisions that define the architecture&apos;s resilience, not its structure.&lt;/p&gt;
&lt;p&gt;All five resolved. The invariant count climbs from 12 to 18. Every aggregate now has at least one invariant. Every cross-context relationship is declared with a type. Every scope decision is documented.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Pass 3&lt;/strong&gt;: Zero gaps. Zero errors. Zero warnings. Converged.&lt;/p&gt;
&lt;p&gt;The trajectory — 18 → 5 → 0 — is the signal. It tells you the methodology is working. Each pass reduces the gap count because the previous pass&apos;s resolutions addressed the root causes, not just the symptoms. When you fix the aggregate that has no invariants, you also fix the downstream gaps that depended on that aggregate having a consistency boundary. Foundational decisions resolve first; dependent decisions cascade.&lt;/p&gt;
&lt;p&gt;If Pass 2 had produced more gaps than Pass 1, that would be the most important signal: the specification is diverging, not converging. Something is structurally wrong — probably a foundational boundary decision that&apos;s incorrect, causing every resolution to introduce new inconsistencies. Non-convergence means stop, revisit the boundaries, and restart the pass.&lt;/p&gt;
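&lt;p&gt;The convergence check is mechanical enough to state as code. A minimal sketch (the function name is illustrative): feed it the gap count from each pass, and it classifies the trajectory exactly as described above.&lt;/p&gt;

```python
def convergence_signal(gap_counts):
    """Classify a pass trajectory. The invariant: each pass must reduce
    the gap count; a pass that fails to reduce it signals divergence."""
    for prev, curr in zip(gap_counts, gap_counts[1:]):
        if curr >= prev:
            return "diverging"  # stop, revisit boundary decisions, restart
    return "converged" if gap_counts[-1] == 0 else "converging"

# The veterinary clinic trajectory from the worked example:
signal = convergence_signal([18, 5, 0])
```

&lt;p&gt;The trajectory 18 → 5 → 0 classifies as converged; 18 → 22 would classify as diverging after a single comparison, which is the signal to stop rather than press on.&lt;/p&gt;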
&lt;hr /&gt;
&lt;h2&gt;The Resolution Log&lt;/h2&gt;
&lt;p&gt;The gap report identifies questions. The resolution log records answers. Every resolution documents three things:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The decision&lt;/strong&gt;: What was chosen and what was rejected. An aggregate without invariants can be fixed by adding invariants (if it&apos;s a real consistency boundary) or dissolved (if it isn&apos;t). The resolution records which path was taken.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The rationale&lt;/strong&gt;: Why this decision was made. This is the most valuable artifact SDD produces. Six months from now, when someone asks &quot;why is this a saga instead of a policy chain?&quot; the resolution log has the answer — with the gap that prompted the question, the alternatives considered, and the reasoning that led to the current design.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The structural impact&lt;/strong&gt;: What changed in the specification. &quot;+1 saga, +1 event, +1 invariant.&quot; This makes the change traceable and auditable. Every element in the final specification traces back to either the initial extraction or a specific gap resolution.&lt;/p&gt;
&lt;p&gt;The resolution log is the architecture decision record that DDD always needed but never formalized at the domain modeling level. &lt;a href=&quot;https://adr.github.io/&quot;&gt;ADRs&lt;/a&gt; capture decisions about technology choices and system-level architecture. Resolution logs capture decisions about domain model structure — why this aggregate exists, why this boundary is drawn here, why this invariant matters.&lt;/p&gt;
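&lt;p&gt;A resolution log entry carries exactly the three fields described above. This is a hypothetical sketch (the Resolution name and the example entry are illustrative, loosely modeled on SG-01 from the worked example):&lt;/p&gt;

```python
from dataclasses import dataclass

@dataclass
class Resolution:
    gap_id: str    # e.g. "SG-01"
    decision: str  # what was chosen, and what was rejected
    rationale: str # why — the most valuable artifact SDD produces
    impact: dict   # structural delta, e.g. {"invariants": 3}

log = [
    Resolution(
        gap_id="SG-01",
        decision="Added three invariants to the Appointment aggregate.",
        rationale="Appointment is a real consistency boundary: double-booking "
                  "and past-dated reschedules must be impossible by construction.",
        impact={"invariants": 3},
    ),
]

# The structural impact is traceable and can be aggregated across a pass:
total_invariants_added = sum(r.impact.get("invariants", 0) for r in log)
```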
&lt;hr /&gt;
&lt;h2&gt;Running Your Own Pass&lt;/h2&gt;
&lt;p&gt;You don&apos;t need tooling to try this. You need a domain specification and the gap report template.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt;: Pick a bounded context in your system. Write the domain specification — name every aggregate, every command, every event, every invariant, every policy, every saga. Make every relationship explicit. If you can&apos;t name it, it&apos;s a gap.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 2&lt;/strong&gt;: Run the gap report against it. For each aggregate, ask: does it have invariants? For each command, ask: does it produce an event? For each bounded context, ask: are its relationships declared? Check the heuristic thresholds — command density, term overlap, saga step count. Look for language overloads and undocumented decisions.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 3&lt;/strong&gt;: Write the resolution log. For every gap, decide: change the model or document why the current design is intentional. Record the rationale.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 4&lt;/strong&gt;: Update the specification with the resolutions and run the gap report again. The gap count should drop. If it does, you&apos;re converging. If it doesn&apos;t, revisit your boundary decisions.&lt;/p&gt;
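&lt;p&gt;Step 2 can be approximated in a few lines if your specification is machine-readable. A minimal sketch, assuming a simple nested-dict spec format of my own invention (the structural checks are the three questions from the text; real heuristic and language checks would layer on top):&lt;/p&gt;

```python
def gap_report(spec):
    """Minimal structural pass: one question per element,
    one gap per unanswered question."""
    gaps = []
    for ctx in spec["contexts"]:
        if not ctx.get("relationships"):
            gaps.append(f"SG: context '{ctx['name']}' declares no relationships")
        for agg in ctx["aggregates"]:
            if not agg.get("invariants"):
                gaps.append(f"SG: aggregate '{agg['name']}' has zero invariants")
            for cmd in agg.get("commands", []):
                if not cmd.get("event"):
                    gaps.append(f"SG: command '{cmd['name']}' produces no event")
    return gaps

spec = {"contexts": [{
    "name": "Scheduling",
    "relationships": ["Billing: customer-supplier"],
    "aggregates": [{
        "name": "Appointment",
        "invariants": [],  # the SG-01 situation: a consistency boundary enforcing nothing
        "commands": [{"name": "BookAppointment", "event": "AppointmentBooked"}],
    }],
}]}
gaps = gap_report(spec)
```

&lt;p&gt;Re-running the same function after each resolution pass gives you the gap-count trajectory; a falling count means you&apos;re converging.&lt;/p&gt;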
&lt;p&gt;The &lt;a href=&quot;https://github.com/listenrightmeow/signal-driven-development&quot;&gt;SDD repository&lt;/a&gt; has everything you need — templates for all four artifacts and a complete worked example showing three-pass convergence. The veterinary clinic example walks through 18 gaps across three passes with full resolution rationale for every decision.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Why This Matters&lt;/h2&gt;
&lt;p&gt;The gap report solves a problem that every DDD practitioner has felt but few have named: the anxiety of not knowing whether the model is done.&lt;/p&gt;
&lt;p&gt;You finish a domain modeling session. The event storming board is covered in stickies. The bounded contexts feel right. The aggregates have names. But there&apos;s a nagging uncertainty — did we miss something? Are the boundaries correct? Is that aggregate doing too much? Is that policy actually a saga?&lt;/p&gt;
&lt;p&gt;Without a gap report, the only way to answer those questions is experience. The architects who&apos;ve seen dozens of domain models can spot the patterns. The architects who haven&apos;t can&apos;t — and they won&apos;t know what they missed until implementation reveals it.&lt;/p&gt;
&lt;p&gt;The gap report makes the experienced architect&apos;s intuition explicit, measurable, and transferable. It asks the questions that a senior DDD practitioner would ask. It flags the patterns that Evans, Vernon, and Brandolini documented. It forces the decisions that matter into the open where they can be examined.&lt;/p&gt;
&lt;p&gt;Each gap is the question an experienced practitioner would ask. SDD asks it for you.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;What Comes Next&lt;/h2&gt;
&lt;p&gt;Post 5 gives you the methodology to try. The &lt;a href=&quot;https://github.com/listenrightmeow/signal-driven-development&quot;&gt;repository&lt;/a&gt; gives you the tools.&lt;/p&gt;
&lt;p&gt;But the gap report revealed something I didn&apos;t anticipate when I first built this process. When AI enters the domain modeling pipeline — when the specifications aren&apos;t authored by humans but generated by language models — a new category of failure emerges. Structurally valid models that are semantically wrong. Patterns that pass every gap report check but misrepresent the domain&apos;s actual behavior.&lt;/p&gt;
&lt;p&gt;Post 6 introduces the first pattern that SDD surfaced about AI-mediated domain modeling: the Candidate Lifecycle.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;em&gt;This is Post 5 of a series on DDD, AI, and the methodology that emerged from practicing both rigorously.&lt;/em&gt; &lt;a href=&quot;https://listenrightmeow.hashnode.dev/introducing-signal-driven-development&quot;&gt;&lt;em&gt;Post 4&lt;/em&gt;&lt;/a&gt; &lt;em&gt;introduced Signal-Driven Development. The series continues with AI-mediated domain modeling patterns starting in Post 6.&lt;/em&gt;&lt;/p&gt;
</content:encoded></item><item><title>Introducing Signal-Driven Development</title><link>https://blog.coada.dev/introducing-signal-driven-development/</link><guid isPermaLink="true">https://blog.coada.dev/introducing-signal-driven-development/</guid><description>Not &quot;done&quot; in the project management sense — not &quot;the sprint ended&quot; or &quot;the stakeholders signed off.&quot; Done in the engineering sense. Structurally complete. Semantically consistent. Ready for implement</description><pubDate>Sat, 21 Mar 2026 19:41:04 GMT</pubDate><content:encoded>&lt;p&gt;Not &quot;done&quot; in the project management sense — not &quot;the sprint ended&quot; or &quot;the stakeholders signed off.&quot; Done in the engineering sense. Structurally complete. Semantically consistent. Ready for implementation with confidence that the architecture will hold.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.amazon.com/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215&quot;&gt;Evans&lt;/a&gt; never answered this question. Neither did &lt;a href=&quot;https://www.amazon.com/Implementing-Domain-Driven-Design-Vaughn-Vernon/dp/0321834577&quot;&gt;Vernon&lt;/a&gt;, &lt;a href=&quot;https://www.eventstorming.com/&quot;&gt;Brandolini&lt;/a&gt;, or anyone else in the DDD community. And it&apos;s not because the question is unimportant — it&apos;s because the traditional answer was always implicit. You knew the model was done when the team stopped finding new insights during knowledge crunching. When the conversations with domain experts stopped producing surprises. When the event storming board stabilized.&lt;/p&gt;
&lt;p&gt;That&apos;s not a definition of done. That&apos;s a feeling.&lt;/p&gt;
&lt;p&gt;I spent the better part of a year doing rigorous domain modeling across multiple products — complex domains with regulatory concerns, event-sourced architectures, cross-context dependencies, and temporal behaviors that don&apos;t fit neatly into Evans&apos;s spatial model. I did this work as a solo practitioner with an AI collaborator, using the approach I described in the first three posts of this series.&lt;/p&gt;
&lt;p&gt;What emerged wasn&apos;t just a set of domain models. It was a methodology. One with a measurable definition of done, a repeatable convergence process, and a feedback loop that makes domain modeling adversarial in the best sense of the word.&lt;/p&gt;
&lt;p&gt;I&apos;m calling it Signal-Driven Development.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Core Problem: DDD Has No Feedback Loop&lt;/h2&gt;
&lt;p&gt;Domain-Driven Design gives you an extraordinary vocabulary for modeling complex systems. Bounded contexts. Aggregates. Domain events. Policies. Sagas. The building blocks are precise, expressive, and battle-tested across two decades of practice.&lt;/p&gt;
&lt;p&gt;What DDD doesn&apos;t give you is a way to know when you&apos;ve used them correctly.&lt;/p&gt;
&lt;p&gt;Consider the typical DDD workflow. You do knowledge crunching — workshops, event storming sessions, whiteboard conversations with domain experts. You iterate on the model. You refine bounded context boundaries. You identify aggregates and their invariants. At some point, someone says &quot;I think we&apos;re good&quot; and the team moves to implementation.&lt;/p&gt;
&lt;p&gt;But &quot;I think we&apos;re good&quot; is a subjective assessment. There&apos;s no structural verification. No way to measure whether the model is complete, whether the boundaries are consistent, whether the heuristics that &lt;a href=&quot;https://www.amazon.com/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215&quot;&gt;Evans&lt;/a&gt; and &lt;a href=&quot;https://www.amazon.com/Implementing-Domain-Driven-Design-Vaughn-Vernon/dp/0321834577&quot;&gt;Vernon&lt;/a&gt; established are actually being honored. The model might have an aggregate with twelve commands and no invariants — a consistency boundary that enforces nothing. It might have two bounded contexts sharing fifteen terms with identical definitions — contexts that aren&apos;t actually bounded. It might have a policy that emits six commands in response to a single event — a stateless reaction doing the work of a saga.&lt;/p&gt;
&lt;p&gt;These aren&apos;t obscure edge cases. They&apos;re the gaps that every experienced DDD practitioner has learned to spot through years of pattern recognition and hard-won intuition. The gaps that junior architects miss entirely. The gaps that AI-generated models introduce systematically, because an LLM has no intuition — only statistical pattern completion.&lt;/p&gt;
&lt;p&gt;DDD needs a feedback loop. Not a checklist. A diagnostic system that examines a domain specification structurally, measures it against the heuristics that the DDD community has established over twenty years, and produces a report that tells you exactly what&apos;s incomplete, what&apos;s inconsistent, and what violates the principles you claim to follow.&lt;/p&gt;
&lt;p&gt;That feedback loop is what I&apos;m calling a &lt;strong&gt;gap report&lt;/strong&gt;. And the methodology built around it is Signal-Driven Development.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Three-Pass Convergence&lt;/h2&gt;
&lt;p&gt;SDD&apos;s core mechanic is structured convergence through iterative gap resolution. The process works like this:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Pass 1&lt;/strong&gt; produces a domain specification — the full structural model of your domain expressed in DDD building blocks. Bounded contexts, aggregates, domain events, commands, policies, sagas, projections, invariants, value objects, domain services. Everything named, everything placed, every relationship explicit. The gap report for Pass 1 identifies what the specification cannot answer: missing invariants, boundary violations, heuristic threshold breaches, methodology process gaps. In a complex domain, Pass 1 typically surfaces 20 to 35 gaps.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Pass 2&lt;/strong&gt; resolves those gaps. This is where the real architecture happens. You&apos;re not generating a model — you&apos;re interrogating one. Every gap is a question the specification couldn&apos;t answer. Some gaps resolve by adding missing elements (an aggregate without invariants needs invariants, or it needs to be dissolved). Some resolve by restructuring (two contexts with heavy term overlap need a boundary reassessment). Some resolve by making an explicit architectural decision and documenting why. The gap count drops — typically to 5 to 10. If it doesn&apos;t drop, the specification is diverging rather than converging, and that divergence is itself a diagnostic signal.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Pass 3&lt;/strong&gt; drives to zero. The remaining gaps are usually the hardest — the architectural decisions that require genuine tradeoff reasoning. A saga with seven steps that might need decomposition. A bounded context whose name doesn&apos;t match its actual responsibility. An aggregate whose event fan-out suggests it&apos;s doing too much. These are the decisions that experienced architects agonize over in whiteboard sessions. SDD forces them into the open by making them measurable.&lt;/p&gt;
&lt;p&gt;The definition of done is zero unresolved gaps. Not &quot;zero gaps identified&quot; — gaps will always be identified. Zero &lt;em&gt;unresolved&lt;/em&gt; gaps. Every gap has been examined, and for each one, the architect has either changed the model to address it or documented why the current design is intentional. The resolution is the artifact, not the absence of the finding.&lt;/p&gt;
&lt;p&gt;Some domains require four or five passes. The three-pass label describes the typical trajectory, not a hard constraint. The invariant is convergence: each pass must reduce the gap count. If it doesn&apos;t, something is structurally wrong with the specification, and that non-convergence is the most important signal the process can produce.&lt;/p&gt;
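&lt;p&gt;That convergence invariant is mechanical enough to sketch. Here is a minimal illustration in Python (the Gap shape and the function names are my own invention, not part of any published SDD tooling) of the two rules above: the resolution is the artifact, and every pass must strictly reduce the unresolved count.&lt;/p&gt;

```python
from dataclasses import dataclass

@dataclass
class Gap:
    finding: str
    resolution: str = ""  # a model change, or a documented "this is intentional"

    @property
    def resolved(self) -> bool:
        # The resolution is the artifact, not the absence of the finding.
        return bool(self.resolution)

def unresolved_count(gaps) -> int:
    return sum(1 for g in gaps if not g.resolved)

def converged(counts_per_pass) -> bool:
    """True when every pass strictly reduced the count and the last pass hit zero."""
    for earlier, later in zip(counts_per_pass, counts_per_pass[1:]):
        if later >= earlier:  # diverging or stalled
            return False
    return counts_per_pass[-1] == 0
```

&lt;p&gt;Non-convergence returns false rather than raising, because in SDD it is a diagnostic signal to investigate, not a fatal error.&lt;/p&gt;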
&lt;hr /&gt;
&lt;h2&gt;Gap Reports as Diagnostic Signals&lt;/h2&gt;
&lt;p&gt;The gap report is the heart of SDD. It&apos;s not a test suite. It&apos;s not a linter output. It&apos;s a structured diagnostic that evaluates a domain specification against three categories of concern.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Structural completeness&lt;/strong&gt; asks whether the specification has the elements it needs. Does every aggregate have at least one invariant? Does every bounded context have a clear linguistic boundary? Are there commands without corresponding domain events? Are there domain events that no policy or projection reacts to? These aren&apos;t style preferences — they&apos;re the structural expectations that &lt;a href=&quot;https://www.amazon.com/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215&quot;&gt;Evans&lt;/a&gt; and &lt;a href=&quot;https://www.amazon.com/Implementing-Domain-Driven-Design-Vaughn-Vernon/dp/0321834577&quot;&gt;Vernon&lt;/a&gt; established. An aggregate without invariants isn&apos;t a design choice; it&apos;s a consistency boundary that enforces nothing.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Heuristic thresholds&lt;/strong&gt; measure whether the specification honors the quantitative guidelines the DDD community has developed through practice. Vernon&apos;s &lt;a href=&quot;https://www.amazon.com/Implementing-Domain-Driven-Design-Vaughn-Vernon/dp/0321834577&quot;&gt;small aggregate heuristic&lt;/a&gt; suggests no more than six commands per aggregate. Context term overlap beyond three shared definitions suggests insufficient separation. Saga step counts beyond five suggest decomposition is needed. These thresholds aren&apos;t arbitrary — they&apos;re grounded in two decades of published work by &lt;a href=&quot;https://www.domainlanguage.com/ddd/&quot;&gt;Evans&lt;/a&gt;, &lt;a href=&quot;https://www.amazon.com/Implementing-Domain-Driven-Design-Vaughn-Vernon/dp/0321834577&quot;&gt;Vernon&lt;/a&gt;, &lt;a href=&quot;https://www.eventstorming.com/&quot;&gt;Brandolini&lt;/a&gt;, and others. They&apos;re configurable, because every domain has legitimate reasons to deviate. But deviations should be conscious decisions, not invisible accidents.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Methodology process gaps&lt;/strong&gt; verify the discipline of the modeling process itself. Does every architectural decision have a documented rationale with at least one rejected alternative? Have gap resolutions been traced to specific changes in the specification? Is the gap count decreasing across passes? These are the meta-rules — the rules about the process, not the model.&lt;/p&gt;
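&lt;p&gt;To make the three categories concrete, here is a deliberately small sketch of an evaluator. The specification shape, field names, and code are my own illustration; only the default thresholds come from the heuristics cited above.&lt;/p&gt;

```python
# Default thresholds, configurable per domain: deviations should be conscious.
THRESHOLDS = {"commands_per_aggregate": 6, "shared_terms": 3, "saga_steps": 5}

def gap_report(spec, thresholds=THRESHOLDS):
    gaps = []
    for agg in spec.get("aggregates", []):
        # Structural completeness: a consistency boundary must enforce something.
        if not agg["invariants"]:
            gaps.append(("structural", agg["name"] + ": no invariants"))
        # Heuristic threshold: the small-aggregate command count.
        if len(agg["commands"]) > thresholds["commands_per_aggregate"]:
            gaps.append(("heuristic", agg["name"] + ": command count over threshold"))
    # Heuristic threshold: contexts sharing too many identically defined terms.
    ctxs = spec.get("contexts", [])
    for i, a in enumerate(ctxs):
        for b in ctxs[i + 1:]:
            shared = set(a["terms"]).intersection(b["terms"])
            if len(shared) > thresholds["shared_terms"]:
                gaps.append(("heuristic", a["name"] + "/" + b["name"] + ": term overlap"))
    # Heuristic threshold: long sagas suggest decomposition.
    for saga in spec.get("sagas", []):
        if len(saga["steps"]) > thresholds["saga_steps"]:
            gaps.append(("heuristic", saga["name"] + ": consider decomposition"))
    # Methodology: every decision records at least one rejected alternative.
    for dec in spec.get("decisions", []):
        if not dec.get("rejected_alternatives"):
            gaps.append(("methodology", dec["name"] + ": no alternatives recorded"))
    return gaps
```

&lt;p&gt;Each finding is a question, not a verdict; the report only surfaces it.&lt;/p&gt;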
&lt;p&gt;The critical insight is that gap reports are &lt;em&gt;signals&lt;/em&gt;, not verdicts. A gap doesn&apos;t mean the model is wrong. It means the model has a question it hasn&apos;t answered. The architect reads the signal, investigates, and either changes the model or documents why the current design is correct. Both outcomes are valid. The gap report&apos;s job is to surface the question. The architect&apos;s job is to answer it.&lt;/p&gt;
&lt;p&gt;This is what makes SDD adversarial in a productive way. The gap report is the colleague who keeps asking &quot;but why?&quot; — not to obstruct, but to force the kind of rigorous reasoning that produces architectures you can defend under scrutiny.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Architecture Palette&lt;/h2&gt;
&lt;p&gt;The third artifact in SDD — alongside the domain specification and the gap report — is the architecture palette. It&apos;s a visual projection of the domain specification expressed in DDD building blocks, organized by bounded context, showing the relationships between aggregates, events, commands, policies, and sagas.&lt;/p&gt;
&lt;p&gt;The palette serves two purposes. First, it&apos;s a communication artifact. A domain specification can be hundreds of elements across dozens of pages. The palette compresses that into a visual map that an architect can hold in working memory. Second, it&apos;s a verification surface. Structural patterns that are invisible in a textual specification become obvious in a visual layout — an aggregate that&apos;s connected to everything, a bounded context with no outbound events, a saga that spans three contexts when it should span two.&lt;/p&gt;
&lt;p&gt;The palette is the thing you put on the wall. The specification is the thing you trust. The gap report is the thing that tells you whether the specification deserves that trust.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;What This Looks Like in Practice&lt;/h2&gt;
&lt;p&gt;I&apos;ve run this process across eight product domains over the past year. Not toy examples — production systems with regulatory requirements, event-sourced architectures, and cross-product dependencies. Here&apos;s what the convergence trajectory actually looks like.&lt;/p&gt;
&lt;p&gt;A typical Pass 1 gap report surfaces 20 to 30 findings. Five to eight are structural errors — missing invariants, aggregates without consistency boundaries, commands that don&apos;t produce events. Ten to fifteen are heuristic threshold violations — aggregates with too many commands, contexts with overlapping vocabulary, policies doing the work of sagas. The rest are methodology gaps — architectural decisions made without documented alternatives, gap resolutions without traceability.&lt;/p&gt;
&lt;p&gt;Pass 2 resolves all of them. The structural errors are usually straightforward — add the missing invariant, dissolve the aggregate that has no reason to exist, split the policy into a saga. The heuristic violations require judgment — sometimes the threshold is right and the model needs to change, sometimes the domain genuinely requires a larger aggregate and the override needs to be documented. The methodology gaps are discipline — go back and document the reasoning.&lt;/p&gt;
&lt;p&gt;Pass 3 finds the residuals. In a complex domain, there are usually two to five gaps that survived Pass 2 — often the ones that require genuine architectural tradeoffs. A bounded context boundary that could reasonably be drawn in two places. A saga decomposition that improves one metric at the cost of another. These are the decisions that define the architecture, and SDD&apos;s contribution is forcing them into explicit, documented, measurable resolution rather than leaving them as implicit assumptions buried in the code.&lt;/p&gt;
&lt;p&gt;By the end of Pass 3, the gap count is zero. Every structural element has been verified. Every heuristic threshold has been honored or consciously overridden. Every architectural decision has been documented with alternatives considered and rationale recorded.&lt;/p&gt;
&lt;p&gt;That&apos;s a definition of done.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Part We Didn&apos;t Expect&lt;/h2&gt;
&lt;p&gt;I want to be transparent about the intellectual path that led here.&lt;/p&gt;
&lt;p&gt;SDD emerged from practice. I didn&apos;t start with a methodology and apply it. I started with a problem — how do you do rigorous domain modeling without a room full of people? — and iterated on the process until something repeatable crystallized. The three-pass convergence, the gap report categories, the architecture palette, the definition of done — all of it came from doing the work and noticing what worked.&lt;/p&gt;
&lt;p&gt;I didn&apos;t research what the DDD community&apos;s leading voices were currently publishing until months after the methodology had stabilized. When I finally did — when I read Evans&apos;s &lt;a href=&quot;https://www.infoq.com/news/2024/03/Evans-ddd-experiment-llm/&quot;&gt;Explore DDD 2024 keynote&lt;/a&gt;, when I looked at what &lt;a href=&quot;https://www.domainlanguage.com/&quot;&gt;Domain Language is now focused on&lt;/a&gt;, when I read Khononov&apos;s work on coupling as a measurable heuristic — the convergence was startling.&lt;/p&gt;
&lt;p&gt;Evans is now focused on &lt;a href=&quot;https://www.domainlanguage.com/articles/ai-components-deterministic-system/&quot;&gt;integrating AI into domain-rich systems&lt;/a&gt; while preserving design integrity. His keynote framing — that an LLM trained on a ubiquitous language is effectively a bounded context — is the same conclusion I reached independently while building the constraints that keep AI output within a closed DDD vocabulary. Same destination, completely different paths. He&apos;s since published a &lt;a href=&quot;https://www.domainlanguage.com/articles/context-mapping-an-ai-based-component/&quot;&gt;follow-up on context mapping with AI-based components&lt;/a&gt; — drawing an Anti-Corruption Layer between deterministic application code and probabilistic LLM output. We built the same pattern independently.&lt;/p&gt;
&lt;p&gt;Khononov&apos;s &lt;a href=&quot;https://www.informit.com/store/balancing-coupling-in-software-design-universal-design-9780137353538&quot;&gt;&lt;em&gt;Balancing Coupling in Software Design&lt;/em&gt;&lt;/a&gt; (Addison-Wesley, 2024) formalizes coupling as a measurable design heuristic with an optimizable function — the same pattern as SDD&apos;s threshold model. Take the qualitative principles Evans established, make them quantitative, set configurable thresholds, measure against them. He arrived at it through academic rigor. I arrived at it through building a system that needed to verify domain models automatically.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://2026.dddeurope.com/&quot;&gt;DDD Europe 2026&lt;/a&gt; has &lt;a href=&quot;https://ddd.academy/accelerate-your-strategic-design-with-llms/&quot;&gt;workshops on accelerating strategic design with large language models&lt;/a&gt; — Thomas Coopman&apos;s two-day session in Antwerp this June. The community is mainstreaming the intersection of AI and DDD as a topic. We&apos;ve been living in that intersection for months.&lt;/p&gt;
&lt;p&gt;I&apos;m not claiming priority. I&apos;m observing convergence. When independent practitioners arrive at the same conclusions from different starting points, it&apos;s not a coincidence — it&apos;s the problem asserting its own shape. The DDD community is converging on the need for measurable heuristics, AI-mediated modeling, and structural verification because those are the problems that surface when you take DDD seriously at scale. Whether you start from Evans&apos;s theory or from a solo practitioner&apos;s frustration, the same walls appear.&lt;/p&gt;
&lt;p&gt;The domain specification that emerged from three-pass convergence was structurally complete enough to verify context provenance — without a single line of implementation code. The design was the proof.&lt;/p&gt;
&lt;p&gt;That&apos;s not a theoretical claim. That&apos;s a measured result from applying this methodology to a real product domain. The specification produced by SDD&apos;s convergence process contained enough structural information that compliance verification could be performed against the domain model directly — before any runtime existed to test against. The architecture didn&apos;t need to be built to be verified. It needed to be modeled rigorously enough that verification was a projection of the model itself.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;What SDD Is Not&lt;/h2&gt;
&lt;p&gt;SDD is not a replacement for DDD. It&apos;s an extension. &lt;a href=&quot;https://www.amazon.com/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215&quot;&gt;Evans&lt;/a&gt; gave us the building blocks — the vocabulary for decomposing complex domains into bounded contexts, aggregates, and domain events. &lt;a href=&quot;https://www.amazon.com/Implementing-Domain-Driven-Design-Vaughn-Vernon/dp/0321834577&quot;&gt;Vernon&lt;/a&gt; gave us the implementation patterns — the tactical guidance for turning those building blocks into working code. &lt;a href=&quot;https://www.eventstorming.com/&quot;&gt;Brandolini&lt;/a&gt; gave us the discovery vocabulary — Event Storming as a method for collaborative knowledge crunching. &lt;a href=&quot;https://narrativedriven.org/&quot;&gt;Narrative-Driven Development&lt;/a&gt; gave us the temporal dimension — the recognition that domains exist in time, not just in space.&lt;/p&gt;
&lt;p&gt;SDD gives DDD a feedback loop and a definition of done.&lt;/p&gt;
&lt;p&gt;The gap report doesn&apos;t replace knowledge crunching. It makes knowledge crunching measurable. The three-pass convergence doesn&apos;t replace architectural intuition. It forces intuition into the open where it can be examined, challenged, and documented. The architecture palette doesn&apos;t replace event storming boards. It persists them.&lt;/p&gt;
&lt;p&gt;If you practice DDD, you can practice SDD tomorrow. The process is the same — model the domain, interrogate the model, refine the model. SDD adds structure to the interrogation and a measurable endpoint to the refinement.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;What Comes Next&lt;/h2&gt;
&lt;p&gt;The templates are coming — gap report templates, architecture palette formats, domain specification structures. Everything you need to run a three-pass convergence on your own domain. I&apos;ll share them through a public repository designed for practitioners who want to try SDD on a real project, not a tutorial exercise.&lt;/p&gt;
&lt;p&gt;Post 5 will go deep on the gap report itself — what the categories look like, how the three-pass trajectory works in detail, and how to read the signals that a gap report produces. That&apos;s where the methodology becomes concrete enough to apply.&lt;/p&gt;
&lt;p&gt;But the gap report is just the beginning. SDD surfaced patterns I didn&apos;t anticipate — patterns about what happens when AI enters the domain modeling process, about what structural verification reveals when it catches problems that humans can&apos;t see, about what a rigorous feedback loop does to the quality of architectural decisions over time.&lt;/p&gt;
&lt;p&gt;Those patterns are the subject of the rest of this series.&lt;/p&gt;
&lt;p&gt;SDD doesn&apos;t replace DDD. It gives DDD a definition of done.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;em&gt;This is Post 4 of a series on DDD, AI, and the methodology that emerged from practicing both rigorously. Posts 1–3 established the solo-builder problem, the AI collaboration model, and the vocabulary gap. The series continues with a deep dive into the gap report in Post 5.&lt;/em&gt;&lt;/p&gt;
</content:encoded></item><item><title>The Coherence Problem</title><link>https://blog.coada.dev/the-coherence-problem/</link><guid isPermaLink="true">https://blog.coada.dev/the-coherence-problem/</guid><description>Everyone&apos;s celebrating velocity. Nobody&apos;s talking about what happens when you succeed.
Everyone&apos;s talking about AI making solo developers faster.
Ship a SaaS in a weekend. Replace your junior devs. Bu</description><pubDate>Mon, 16 Mar 2026 06:42:13 GMT</pubDate><content:encoded>&lt;p&gt;&lt;em&gt;Everyone&apos;s celebrating velocity. Nobody&apos;s talking about what happens when you succeed.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Everyone&apos;s talking about AI making solo developers faster.&lt;/p&gt;
&lt;p&gt;Ship a SaaS in a weekend. Replace your junior devs. Build an MVP before lunch.&lt;/p&gt;
&lt;p&gt;Cool. But nobody&apos;s talking about what happens when you &lt;em&gt;succeed&lt;/em&gt;.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Week That Changed My Framing&lt;/h2&gt;
&lt;p&gt;I spent the last week maintaining a multi-product software ecosystem — alone, with AI as my architecture partner. Not generating code. Not shipping features. Managing the &lt;em&gt;coherence&lt;/em&gt; of a system that&apos;s grown complex enough to behave like a team-scale project.&lt;/p&gt;
&lt;p&gt;One architectural decision triggered documentation updates across 21 pages. A single boundary change cascaded through 5 product requirement documents, 4 platform-level architecture decision records, and a dozen cross-references that all needed to stay internally consistent.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;That&apos;s not a speed problem. That&apos;s a coherence problem. And it&apos;s the problem nobody warns you about when they celebrate the solo AI dev.&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Speed vs. Coherence&lt;/h2&gt;
&lt;p&gt;Here&apos;s what I&apos;ve learned building this way:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;AI doesn&apos;t make you faster. It makes you capable of holding more complexity in working memory than one person should be able to.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That&apos;s a fundamentally different thing.&lt;/p&gt;
&lt;p&gt;Speed means you do the same work in less time. Coherence means you do work that wasn&apos;t possible before — maintaining the kind of cross-system architectural consistency that used to require a team of people whose entire job was keeping the model straight.&lt;/p&gt;
&lt;p&gt;When I work with AI on architecture, the value isn&apos;t autocomplete. It&apos;s that my AI partner can hold the full specification of a multi-product ecosystem in context while I make a decision — and then execute the downstream implications of that decision across every artifact it touches. In real time. Without drift.&lt;/p&gt;
&lt;p&gt;A human team doing this would need an architect who understands the decision, a technical writer updating the docs, a project manager tracking the cascade, and a QA engineer verifying consistency.&lt;/p&gt;
&lt;p&gt;I have one terminal and a conversation.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Part Nobody Warns You About&lt;/h2&gt;
&lt;p&gt;But here&apos;s the part the &quot;AI productivity&quot; narrative misses entirely:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The system got complex enough that I had to build a new service just to manage the dependency graph between my own architectural artifacts.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Read that again.&lt;/p&gt;
&lt;p&gt;I didn&apos;t build it because I wanted to. I built it because the cascade problem — one change rippling through dozens of documents — became its own engineering challenge. The AI partnership made me productive enough to create a system that now requires its own tooling to maintain.&lt;/p&gt;
&lt;p&gt;That&apos;s not a failure. That&apos;s what real architecture looks like. It&apos;s just that most solo developers never get there because without AI, the complexity ceiling hits you long before the architecture demands it.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Question Nobody&apos;s Asking&lt;/h2&gt;
&lt;p&gt;The discourse right now is fixated on velocity. How fast can you ship. How many lines of code per day. How quickly you can go from idea to deployment.&lt;/p&gt;
&lt;p&gt;I&apos;d argue the more interesting question is:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;How much architectural integrity can one person sustain?&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Because the companies that win long-term aren&apos;t the ones that shipped fastest. They&apos;re the ones whose architecture held up when it mattered — when the edge cases arrived, when the compliance audit landed, when the system had to evolve without a rewrite.&lt;/p&gt;
&lt;p&gt;If AI is genuinely changing what a solo builder can accomplish, the interesting frontier isn&apos;t &quot;build it faster.&quot; It&apos;s &quot;build it with the kind of structural rigor that used to require a team standing behind you.&quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;The solo dev + AI revolution is real. I&apos;m living it.&lt;/p&gt;
&lt;p&gt;But if all you&apos;re using AI for is speed, you&apos;re solving the wrong problem. Speed was never the bottleneck. The bottleneck was always one person trying to hold an entire system in their head without losing the thread.&lt;/p&gt;
&lt;p&gt;AI doesn&apos;t speed up that work. It makes it possible for the first time.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;That&apos;s the unlock. Not velocity. Coherence.&lt;/strong&gt;&lt;/p&gt;
</content:encoded></item><item><title>The Single-Seat Architect</title><link>https://blog.coada.dev/the-single-seat-architect/</link><guid isPermaLink="true">https://blog.coada.dev/the-single-seat-architect/</guid><description>A lot of professionals aren&apos;t making it back into the market.
That&apos;s not a prediction. That&apos;s what&apos;s happening right now. And if you&apos;re in this industry, you already know someone it&apos;s happened to. You</description><pubDate>Mon, 16 Mar 2026 06:14:10 GMT</pubDate><content:encoded>&lt;p&gt;A lot of professionals aren&apos;t making it back into the market.&lt;/p&gt;
&lt;p&gt;That&apos;s not a prediction. That&apos;s what&apos;s happening right now. And if you&apos;re in this industry, you already know someone it&apos;s happened to. You probably just haven&apos;t said it out loud yet.&lt;/p&gt;
&lt;p&gt;So let me say it.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;How We Got Here&lt;/h2&gt;
&lt;p&gt;A non-trivial percentage of CEOs discovered ChatGPT sometime in the last two years. They typed a prompt. They got something back that looked like a product spec, or a business plan, or a chunk of code. And something broke in their brain.&lt;/p&gt;
&lt;p&gt;They didn&apos;t learn what AI can do. They learned what AI &lt;em&gt;looks like&lt;/em&gt; it can do. And that&apos;s a dangerous distinction — because now they think building software is easy. They think the reason it was hard before was the developers, not the problem. They think the $200/hr architect was overhead, not the only person in the room who understood why the last three projects failed.&lt;/p&gt;
&lt;p&gt;This isn&apos;t new. Developers have been treated as if they were a dime a dozen for fifteen years. Every outsourcing wave, every bootcamp boom, every &quot;we&apos;ll just hire three juniors instead of one senior&quot; decision — it all came from the same place: the belief that software is a labor problem, not a thinking problem.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;AI didn&apos;t create that delusion. It compounded it.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;hr /&gt;
&lt;h2&gt;What Actually Changed&lt;/h2&gt;
&lt;p&gt;Now the thinking goes: if AI can generate code, why do I need a team? If ChatGPT can write a PRD, why do I need a product person? If I can get a working prototype in a weekend, why was that project estimated at six months?&lt;/p&gt;
&lt;p&gt;And here&apos;s the thing — they&apos;re not entirely wrong that the landscape changed. They&apos;re just catastrophically wrong about &lt;em&gt;what changed&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;What changed isn&apos;t that software got easier to build. What changed is that the floor collapsed. The baseline tasks that used to require a human — writing boilerplate, translating specs into code, wiring up CRUD endpoints — those aren&apos;t jobs anymore. They&apos;re prompts.&lt;/p&gt;
&lt;p&gt;The hard problems are exactly as hard as they were before AI. System design. Ambiguity resolution. Figuring out why the requirements contradict each other before you&apos;ve built the wrong thing for six months. Understanding regulatory constraints. Knowing when the architecture will collapse under scale and why. Making trade-offs between consistency and availability when a product person is staring at you waiting for a yes.&lt;/p&gt;
&lt;p&gt;AI can&apos;t do any of that. Not because the models aren&apos;t good enough yet. Because those problems require judgment, context, and the kind of earned intuition that only comes from having been wrong enough times to recognize the shape of the next mistake before you make it.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Other Side&lt;/h2&gt;
&lt;p&gt;I know this because I lived the other side of it.&lt;/p&gt;
&lt;p&gt;I spent three quarters unemployed last year. Twenty-plus years of building software, and I couldn&apos;t get back in. Companies everywhere were claiming they were hiring, but I honestly don&apos;t think 90% of them were. Many were preparing for headcount budgets that never materialized. Others were taking advantage of a flooded market, low-balling experienced engineers 50 to 70 percent below their market rate — and finding takers, because people were desperate.&lt;/p&gt;
&lt;p&gt;The market was nasty. From what I&apos;m hearing, it still is.&lt;/p&gt;
&lt;p&gt;And while I was in that gap, I watched the AI narrative accelerate in real time. Every week, another LinkedIn post about shipping a product in a weekend. Another CEO tweeting about replacing their engineering team. Another think piece about how developers need to &quot;adapt or die&quot; — written by someone who&apos;s never had to adapt to anything harder than a new iPhone.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Who&apos;s Actually Coming Back&lt;/h2&gt;
&lt;p&gt;But here&apos;s what I noticed from the other side.&lt;/p&gt;
&lt;p&gt;The professionals starting to make it back aren&apos;t the ones with the deepest specialization. They&apos;re the ones with the widest experience. The developer who also understood product. The backend engineer who also did infrastructure. The architect who&apos;d sat in sales calls and heard what the customer actually needed versus what the ticket said.&lt;/p&gt;
&lt;p&gt;They didn&apos;t just learn to code. They learned to think across boundaries. And now, paired with AI, those people are devastating.&lt;/p&gt;
&lt;p&gt;A new role is settling under that collapsing floor. I&apos;d call it the &lt;strong&gt;single-seat architect&lt;/strong&gt; — someone who spent years accumulating lateral experience across product, engineering, data, infrastructure, and operations, and who now discovers that AI gives them the leverage to build entire products on their own.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Not prototypes. Not MVPs held together with API calls and prayer. Actual products with real architecture, real domain modeling, real compliance, real coherence across every layer of the stack.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That&apos;s not hype. I&apos;m building this way right now — solo, with AI as my architecture partner. The rigor isn&apos;t lesser because there&apos;s no team. It&apos;s different. AI doesn&apos;t replace the people I&apos;ve worked with over the years. It replaces the limitations of working alone.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Uncomfortable Part&lt;/h2&gt;
&lt;p&gt;The market isn&apos;t going to correct for this kindly. There will not be a gentle transition period where displaced specialists reskill into architects over a few months of online courses. Architecture isn&apos;t a certification. Lateral thinking isn&apos;t a bootcamp. The judgment that makes someone valuable in the AI era was built over years of cross-disciplinary work — product decisions, infrastructure trade-offs, customer conversations, failed projects, recovered projects.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;You can&apos;t speedrun that.&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;So when I hear people say &quot;AI won&apos;t replace developers, it&apos;ll just change what developers do&quot; — I think that&apos;s a comforting lie dressed up as optimism. AI is already replacing developers. What it won&apos;t replace is the person who knows which problem to solve, how the pieces fit together, and what&apos;s going to break at 3 AM when the architecture can&apos;t support what sales just promised.&lt;/p&gt;
&lt;p&gt;If that&apos;s you — if you spent your career going wide instead of just deep — this is your moment. The market has never valued lateral thinkers more than it does right now, even if it doesn&apos;t know how to say that in a job posting yet.&lt;/p&gt;
&lt;p&gt;And if that&apos;s not you yet — stop learning another framework. Start learning an adjacent discipline. Product. Data. Infrastructure. Compliance. Sales. Anything that forces you to think about software as a &lt;em&gt;system&lt;/em&gt; rather than a &lt;em&gt;stack&lt;/em&gt;.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;The floor is gone. Nobody&apos;s going to rebuild it for you. Not your employer. Not a bootcamp. Not the next framework.&lt;/p&gt;
&lt;p&gt;But if you&apos;ve spent your career learning how things break — not just how they&apos;re built — you don&apos;t need a floor. You never did. The floor was for people who needed something to stand on. You&apos;re the person other people called when the floor gave out.&lt;/p&gt;
&lt;p&gt;That hasn&apos;t changed. The market just forgot for a minute.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Remind them.&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
</content:encoded></item><item><title>The Reactive Path Has No Vocabulary</title><link>https://blog.coada.dev/the-reactive-path-has-no-vocabulary/</link><guid isPermaLink="true">https://blog.coada.dev/the-reactive-path-has-no-vocabulary/</guid><description>I&apos;ve been practicing Domain-Driven Design for over a decade. I&apos;ve built event-sourced systems. I&apos;ve implemented CQRS. I&apos;ve drawn bounded context boundaries, defined aggregates, modeled domain events w</description><pubDate>Tue, 10 Mar 2026 09:45:42 GMT</pubDate><content:encoded>&lt;p&gt;I&apos;ve been practicing Domain-Driven Design for over a decade. I&apos;ve built event-sourced systems. I&apos;ve implemented CQRS. I&apos;ve drawn bounded context boundaries, defined aggregates, modeled domain events with intentional language. I&apos;d have told you, with confidence, that I understood the methodology.&lt;/p&gt;
&lt;p&gt;Then I ran a structured assessment on my own knowledge and found a gap I&apos;d been building around for years.&lt;/p&gt;
&lt;p&gt;Not a minor gap. Not an edge case or an advanced technique I hadn&apos;t gotten to yet. A fundamental gap in how I modeled half of every system I&apos;d ever built.&lt;/p&gt;
&lt;p&gt;I want to tell you what it was, because I think it reveals something about how DDD actually transfers knowledge — and why that transfer mechanism fails for more people than anyone admits.&lt;/p&gt;
&lt;p&gt;Here&apos;s the setup. If you&apos;ve worked with event-driven architecture, you know there are two sides to the system. There&apos;s the command path — where requests come in, business rules get enforced, and domain events get emitted. And there&apos;s the reactive path — where those events get consumed and the system responds.&lt;/p&gt;
&lt;p&gt;My command path was clean. Domain-organized. Rigorous naming. Aggregates enforced invariants. Commands expressed intent in business language. Value Objects carried meaning. If you read the command side of my code, you could understand what the business did.&lt;/p&gt;
&lt;p&gt;My reactive path was a different story.&lt;/p&gt;
&lt;p&gt;Every function that consumed a domain event was an &quot;event handler.&quot; That was the entire vocabulary. Hundreds of functions across multiple systems, all filed under one undifferentiated category. Some of them transformed data into read models. Some of them evaluated conditions and issued commands to other aggregates. Some of them coordinated multi-step workflows across time, tracking state across multiple events. All of them were &quot;event handlers.&quot;&lt;/p&gt;
&lt;p&gt;I wasn&apos;t modeling the reactive side of my systems. I was just coding it.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I wasn&apos;t modeling the reactive side of my systems. I was just coding it.&lt;/p&gt;
&lt;/blockquote&gt;
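&lt;p&gt;Concretely, that style looks something like this. A minimal sketch; the function names and event shapes are hypothetical stand-ins, not from any particular framework or from my actual systems:&lt;/p&gt;

```python
# Hypothetical sketch: three very different responsibilities, one undifferentiated
# shape. Nothing in the naming or structure says which function reshapes data,
# which one decides, and which one coordinates across time.

def handle_order_placed(event, read_db):
    # Secretly a projection: pure reshaping into a read model. No decisions.
    read_db[event["order_id"]] = {"status": "placed", "total": event["total"]}

def handle_payment_received(event, command_bus):
    # Secretly a policy: evaluate one condition, issue at most one command.
    if event["covers_balance"]:
        command_bus.append(("ShipOrder", event["order_id"]))

def handle_shipment_failed(event, saga_state, command_bus):
    # Secretly a saga step: records progress and compensates a prior step.
    saga_state[event["order_id"]] = "compensating"
    command_bus.append(("RefundPayment", event["order_id"]))
```

&lt;p&gt;Three distinct architectural concerns, and the only thing the code structure tells you is that each one starts with &lt;code&gt;handle_&lt;/code&gt;.&lt;/p&gt;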
&lt;p&gt;The gap became visible when I ran a structured assessment — a systematic walk through DDD&apos;s conceptual landscape, starting from philosophical foundations and working forward through tactical patterns, strategic design, and the post-Evans vocabulary that&apos;s emerged in the last twenty years.&lt;/p&gt;
&lt;p&gt;I did this with an AI collaborator, specifically to pressure-test what I thought I knew against what I actually knew. Not to learn DDD from scratch — to find the holes that years of practice had papered over.&lt;/p&gt;
&lt;p&gt;The hole appeared in the reactive path.&lt;/p&gt;
&lt;p&gt;The DDD community — particularly through the work of practitioners like Alberto Brandolini, creator of &lt;a href=&quot;https://www.eventstorming.com/&quot;&gt;Event Storming&lt;/a&gt;, and through the broader &lt;a href=&quot;https://www.narrativedriven.org/article/introduction-to-ndd&quot;&gt;Narrative-Driven Development (NDD)&lt;/a&gt; movement — has developed precise vocabulary for the reactive side of a system. Three concepts, each with distinct responsibilities:&lt;/p&gt;
&lt;p&gt;A &lt;strong&gt;Projection&lt;/strong&gt; is pure data transformation. An event arrives, data moves into a read model. No business logic. No decisions. Just reshaping data for consumption.&lt;/p&gt;
&lt;p&gt;A &lt;strong&gt;Policy&lt;/strong&gt; is a decision point. &quot;When X happens, evaluate a condition, and if it holds, issue command Y.&quot; One event in, one command out. Stateless. The bridge between &quot;something happened&quot; and &quot;decide what to do about it.&quot;&lt;/p&gt;
&lt;p&gt;A &lt;strong&gt;Saga&lt;/strong&gt; is coordination across time. &quot;When X happens, do Y, then wait for Z, then if Z succeeds do W, but if Z fails compensate by doing V.&quot; Stateful. Tracks progress across multiple events and commands. No single aggregate owns the flow — the saga manages it as a long-running process.&lt;/p&gt;
&lt;p&gt;These are not interchangeable. A projection has no business logic. A policy makes a single decision. A saga tracks state across steps. They have different responsibilities, different state requirements, different failure modes, and, critically, different testing strategies. Treating them as the same thing — &quot;event handlers&quot; — collapses three distinct architectural concerns into one bucket.&lt;/p&gt;
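&lt;p&gt;To make the contrast concrete, here is a minimal sketch of the same reactive concerns given structural identities. The class names and event shapes are hypothetical; the point is that each shape now advertises its responsibility:&lt;/p&gt;

```python
# Hypothetical sketch: three reactive responsibilities, named for what they are.

class OrderSummaryProjection:
    """Pure data transformation. Event in, read model updated. No business logic."""
    def __init__(self):
        self.rows = {}

    def apply(self, event):
        self.rows[event["order_id"]] = {"status": event["status"]}

class ShipWhenPaidPolicy:
    """Stateless decision point. One event in, at most one command out."""
    def decide(self, event):
        if event["covers_balance"]:
            return ("ShipOrder", event["order_id"])
        return None

class FulfillmentSaga:
    """Stateful coordination across time, with an explicit compensation path."""
    def __init__(self):
        self.step = "awaiting_payment"

    def on_payment_received(self, event):
        self.step = "awaiting_shipment"
        return ("ShipOrder", event["order_id"])

    def on_shipment_failed(self, event):
        self.step = "compensating"  # the failure path is modeled, not implicit
        return ("RefundPayment", event["order_id"])
```

&lt;p&gt;The testing consequence falls out immediately: the projection is tested by asserting rows in the read model, the policy by a decision table of events, and the saga by walking its state machine through both the success path and the failure path.&lt;/p&gt;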
&lt;p&gt;That&apos;s exactly what I&apos;d been doing. Across every system I&apos;d built.&lt;/p&gt;
&lt;p&gt;And here&apos;s the part that stung: I&apos;d read about all three. I&apos;d encountered the terms. I could have given you a textbook definition of each one. But textbook definitions and operational understanding are completely different things. The concepts existed in my reading vocabulary but not in my modeling vocabulary. I knew the words. I didn&apos;t use them to think.&lt;/p&gt;
&lt;p&gt;This is where Evans&apos;s deeper insight about language hits home — the one most practitioners skim past on their way to learning aggregate patterns. Evans embedded a claim in DDD that comes directly from linguistics: the &lt;a href=&quot;https://en.wikipedia.org/wiki/Linguistic_relativity&quot;&gt;Sapir-Whorf hypothesis&lt;/a&gt;. The idea that the language available to you shapes — and constrains — what you can think.&lt;/p&gt;
&lt;p&gt;If you don&apos;t have a word for something, you will struggle to model it.&lt;/p&gt;
&lt;p&gt;I didn&apos;t have operational words for the reactive path. So I didn&apos;t model it. I coded it. The functions worked. The events got processed. But the structural distinctions that would have made the architecture legible, testable, and evolvable were absent — not because the code was wrong, but because the vocabulary I was thinking in didn&apos;t make the distinctions visible.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;If you don&apos;t have a word for something, you will struggle to model it. I didn&apos;t have operational words for the reactive path. So I didn&apos;t model it.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The moment the vocabulary landed — the moment &quot;event handler&quot; split into &quot;projection,&quot; &quot;policy,&quot; and &quot;saga&quot; as distinct concepts in my working mental model — I could suddenly see the structure that had been invisible. This function is a projection: it has no business logic, it just reshapes data. That function is a policy: it evaluates a condition and issues a command. That other function is actually a saga: it&apos;s tracking state across multiple steps, and the fact that it doesn&apos;t have explicit compensation logic is a bug, not a feature.&lt;/p&gt;
&lt;p&gt;The assessment didn&apos;t teach me something I&apos;d never heard of. It surfaced something I&apos;d heard of but never internalized — because the environment in which I practiced DDD never forced the internalization.&lt;/p&gt;
&lt;p&gt;And that&apos;s the real finding. Not the gap itself, but why the gap existed.&lt;/p&gt;
&lt;p&gt;DDD transmits its deepest knowledge through team osmosis. You sit in a room where a senior practitioner points at a reactive flow and says &quot;that&apos;s a policy, not a saga — here&apos;s why the distinction matters.&quot; You absorb the vocabulary not through reading but through repeated exposure in collaborative contexts where the terms carry operational weight. The naming happens in the room, during modeling sessions, through productive disagreement about what things are.&lt;/p&gt;
&lt;p&gt;If you&apos;ve never been in that room — if you&apos;re a solo practitioner, or you work on a team where nobody has that vocabulary, or you crossed into DDD from a team that practiced it implicitly without naming the patterns — those concepts stay in the reading layer. You know the words. You can define them on a whiteboard. You just don&apos;t reach for them when you&apos;re actually modeling, because they never crossed from knowledge into instinct.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;DDD transmits its deepest knowledge through team osmosis. If you&apos;ve never been in the room where the naming happens, the concepts stay in the reading layer.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This isn&apos;t a gap that reading fixes. I&apos;d read about policies and sagas. The books were on my shelf. The gap persisted because reading is not the same as being challenged to apply the vocabulary under pressure, in the context of a real system, where someone asks &quot;is that a policy or a saga?&quot; and you have to answer with consequences.&lt;/p&gt;
&lt;p&gt;The structured assessment recreated that challenge. It forced me to walk through my own architecture and name things — not in the abstract, but against real systems I&apos;d built. And when the naming forced a distinction I&apos;d never made, the gap was visible.&lt;/p&gt;
&lt;p&gt;One gap. Found in days. After being invisible for my entire career.&lt;/p&gt;
&lt;p&gt;That tells me something about DDD&apos;s accessibility problem that goes beyond the solo-builder framing I&apos;ve been writing about. The problem isn&apos;t just that solo builders lack teams. It&apos;s that the methodology&apos;s most important knowledge transfers through a channel — collaborative practice with experienced practitioners — that has no fallback. If the channel isn&apos;t available, the knowledge doesn&apos;t transfer. Not because the practitioner is lazy or the books are bad. Because some knowledge only crystallizes under pressure, and pressure requires a counterpart.&lt;/p&gt;
&lt;p&gt;The assessment I ran proved that the counterpart doesn&apos;t have to be a human team in a physical room. But it does have to exist. And the fact that this gap persisted — a gap the community had named, published about, and built tooling around — suggests that a lot of practitioners are carrying similar invisible gaps right now.&lt;/p&gt;
&lt;p&gt;Not because they haven&apos;t studied. Because the transfer mechanism DDD depends on doesn&apos;t reach them.&lt;/p&gt;
&lt;p&gt;What I&apos;m sitting with now is a follow-up question: if one structured assessment can surface a long-standing gap in a few days, what would a systematic methodology for this kind of diagnostic look like? Not a coaching plan. Not a reading list. A repeatable discipline that any architect can apply to their own understanding and their own systems.&lt;/p&gt;
&lt;p&gt;I have some ideas. More on that next.&lt;/p&gt;
&lt;p&gt;This is the third post in a series on rigorous domain modeling without a team. Previously: &lt;a href=&quot;https://listenrightmeow.hashnode.dev/ddd-has-a-solo-builder-problem-and-nobody-talks-about-it&quot;&gt;DDD Has a Solo-Builder Problem&lt;/a&gt; and &lt;a href=&quot;https://listenrightmeow.hashnode.dev/knowledge-crunching-doesnt-need-a-room&quot;&gt;Knowledge Crunching Doesn&apos;t Need a Room&lt;/a&gt;.&lt;/p&gt;
</content:encoded></item><item><title>Knowledge Crunching Doesn&apos;t Need a Room</title><link>https://blog.coada.dev/knowledge-crunching-doesnt-need-a-room/</link><guid isPermaLink="true">https://blog.coada.dev/knowledge-crunching-doesnt-need-a-room/</guid><description>Most people think the hard part of domain modeling is learning the patterns. Aggregates, entities, value objects, domain events, repositories — the tactical toolbox. There are books, courses, conferen</description><pubDate>Tue, 10 Mar 2026 08:35:16 GMT</pubDate><content:encoded>&lt;p&gt;Most people think the hard part of domain modeling is learning the patterns. Aggregates, entities, value objects, domain events, repositories — the tactical toolbox. There are books, courses, conference talks, and GitHub repos dedicated to teaching you the building blocks.&lt;/p&gt;
&lt;p&gt;The building blocks aren&apos;t the hard part.&lt;/p&gt;
&lt;p&gt;The hard part is the moment someone says &quot;we approve claims&quot; and you have to decide whether approval is a command on the Claims aggregate, a policy triggered by an upstream event, a saga that orchestrates across multiple contexts, or all three depending on the claim type and the regulatory jurisdiction.&lt;/p&gt;
&lt;p&gt;The hard part is drawing a bounded context boundary on Tuesday that feels clean and coherent, then realizing on Thursday that an invariant you didn&apos;t consider means two things you separated actually need transactional consistency — which means they belong in the same aggregate, which means your context boundary is in the wrong place, which means the event contract you designed between those contexts no longer makes sense.&lt;/p&gt;
&lt;p&gt;The hard part is holding six bounded contexts in your head simultaneously, knowing that a decision in Context Three depends on something you resolved in Context One, and that both cascade into how Context Five consumes events. Every boundary decision has downstream consequences. Every downstream consequence can invalidate upstream decisions. The model is a web of interdependencies, and the only way to validate it is to trace through the whole thing at once.&lt;/p&gt;
&lt;p&gt;This is what Eric Evans meant by &lt;a href=&quot;https://www.domainlanguage.com/ddd/&quot;&gt;knowledge crunching&lt;/a&gt;. Not learning patterns — resolving ambiguity. Two or more people sitting with a domain until the model gets sharp enough to survive contact with reality. The patterns are the vocabulary. Knowledge crunching is the conversation.&lt;/p&gt;
&lt;p&gt;And that conversation is relentless. It&apos;s not a meeting you schedule. It&apos;s the continuous, grinding process of someone saying &quot;what about this case?&quot; and the model either absorbing the question or breaking under it. When it breaks, you reform it. When it absorbs, you move to the next question. The model converges through pressure.&lt;/p&gt;
&lt;p&gt;Here&apos;s what that pressure actually looks like in practice.&lt;/p&gt;
&lt;p&gt;I was working through a domain specification for a system with six bounded contexts. Somewhere around the third context, the model exceeded what I could hold in working memory simultaneously. Not because the individual pieces were complex — each context was tractable in isolation. But the integration points between them weren&apos;t. The aggregate boundaries in Interpretation depended on decisions made in Ingestion. Those decisions had downstream consequences for how Structural Analysis consumed events. And Structural Analysis fed back into the gap report that could invalidate the original Interpretation boundaries.&lt;/p&gt;
&lt;p&gt;In a team, this is where you call a meeting. Get the owners of each context in a room. Walk the whiteboard. Each person holds their slice deeply and the adjacent slices loosely. The integration points live in the conversation between people. That&apos;s the whole point of the room — to synchronize what&apos;s in six different heads into one coherent model.&lt;/p&gt;
&lt;p&gt;The room works because it distributes the cognitive load. No one person needs to hold the full model. The team holds it collectively, and the conversation is the synchronization mechanism. When someone challenges a boundary, the person who owns the adjacent context can immediately respond with &quot;that breaks my event contract&quot; or &quot;that actually solves a problem I&apos;ve been sitting on.&quot; The model gets refined through distributed challenge.&lt;/p&gt;
&lt;p&gt;Without the room, the cognitive load doesn&apos;t distribute. It stacks. Every context, every boundary, every event flow, every cascade sits in one head. And working memory has hard limits. So you do what any reasonable person does — you simplify. You make the boundary decision that feels right for the context you&apos;re focused on, without fully tracing the cascade to all six contexts. You defer the integration questions. You move forward.&lt;/p&gt;
&lt;p&gt;And then you discover the problems during implementation, when the cost of correction is highest. Exactly the failure mode DDD was designed to prevent.&lt;/p&gt;
&lt;p&gt;This is the structural problem underneath the solo-builder problem I wrote about last time. It&apos;s not just that solo builders lack the team. It&apos;s that the specific thing the team provides — distributed cognitive load under continuous challenge — is the mechanism that makes domain models converge. Without it, models don&apos;t get wrong. They get shallow. They survive the easy questions and collapse on the hard ones.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Without it, models don&apos;t get wrong. They get shallow. They survive the easy questions and collapse on the hard ones.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;So the actual question isn&apos;t &quot;how do you do DDD alone.&quot; It&apos;s &quot;how do you recreate the conditions under which ambiguity gets resolved.&quot; You need sustained challenge across the full model. You need a counterpart that holds the complete state while you interrogate specific corners of it. You need someone who asks &quot;what happens when step three fails after steps one and two have already committed?&quot; at the exact moment you&apos;re tempted to model a saga as a simple policy because the simpler pattern is faster to think through.&lt;/p&gt;
&lt;p&gt;This is what I discovered — accidentally, honestly — when I started using AI as a domain modeling collaborator. Not a code generator. Not an architecture advisor. A collaborator in the Evans sense: a counterpart in the knowledge crunching process.&lt;/p&gt;
&lt;p&gt;What surprised me wasn&apos;t that it could discuss DDD patterns. Any LLM can recite the difference between a saga and a policy. What surprised me was what happened when the model got large.&lt;/p&gt;
&lt;p&gt;When I challenged a boundary decision in the third context, the AI didn&apos;t respond to that question in isolation. It held the full state of all six contexts — every aggregate, every event flow, every policy trigger — and surfaced the cascade. &quot;If you move this aggregate, it breaks this event contract in Context Two, which means this saga in Context Five needs to change from choreography to orchestration.&quot; The full trace. Immediately. Without being asked to check.&lt;/p&gt;
&lt;p&gt;No human collaborator does that. Not because they aren&apos;t capable, but because working memory has limits. In a team of six, each person holds their context deeply and the adjacent contexts loosely. The integration points live in the gaps between people&apos;s heads. That&apos;s why you need the room — to synchronize those gaps. The AI doesn&apos;t have gaps. It holds the full model at the same fidelity across all contexts, all the time.&lt;/p&gt;
&lt;p&gt;This changed the nature of the collaboration in a way I didn&apos;t anticipate.&lt;/p&gt;
&lt;p&gt;In a team, challenging a decision carries social cost. You&apos;re questioning someone&apos;s judgment. Even in healthy teams, there&apos;s friction. So you pick your battles. You challenge the decisions that feel most wrong and let the merely suboptimal ones pass. That&apos;s rational. You can&apos;t challenge everything.&lt;/p&gt;
&lt;p&gt;With an AI collaborator, the cost of challenge drops to zero. Every decision gets pressure-tested. Every boundary gets interrogated. Not once — dozens of times, across dozens of decisions, without anyone getting tired or defensive. You end up in genuine friction: you challenge a boundary, the AI pushes back with a structural argument, you counter with a domain argument, the model sharpens. That cycle repeats until the model absorbs the pressure or you explicitly decide to defer.&lt;/p&gt;
&lt;p&gt;I&apos;ll give you one specific moment that stuck with me.&lt;/p&gt;
&lt;p&gt;The specification had a multi-step refinement flow. I&apos;d modeled it as a policy — &quot;when this event arrives, do this thing.&quot; Quick. Clean. Easy to reason about. The AI challenged the classification and walked through a failure scenario: what happens when step three fails after steps one and two have already committed? A policy doesn&apos;t track state across steps. A policy doesn&apos;t compensate for partial completion. That&apos;s a saga.&lt;/p&gt;
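&lt;p&gt;A minimal sketch of the distinction that challenge surfaced. The names are hypothetical stand-ins for the real refinement flow:&lt;/p&gt;

```python
# Hypothetical sketch: the same flow modeled both ways.

def refine_policy(event, command_bus):
    # As a policy: react and forget. If a later step fails, nothing here
    # remembers that earlier steps already committed.
    command_bus.append(("RunRefinement", event["doc_id"]))

class RefinementSaga:
    # As a saga: partial completion is tracked, and failure has an explicit answer.
    def __init__(self):
        self.committed = []

    def on_step_completed(self, event):
        self.committed.append(event["step"])

    def on_step_failed(self, event, command_bus):
        # Compensate every step that already committed, most recent first.
        for step in reversed(self.committed):
            command_bus.append(("Undo", step))
        self.committed = []
```

&lt;p&gt;The policy version is genuinely simpler — right up until step three fails and there is no record of what needs to be undone.&lt;/p&gt;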
&lt;p&gt;I knew the difference between policies and sagas. I&apos;d implemented both. But in the flow of modeling, without someone in the room to ask &quot;what happens when it fails?&quot;, I&apos;d reached for the simpler pattern. Not out of ignorance — out of the natural tendency to simplify when there&apos;s no one applying counter-pressure in real time.&lt;/p&gt;
&lt;p&gt;That single challenge — one question about a failure path — changed the specification&apos;s reactive architecture. And it was the kind of question that should surface during modeling but often doesn&apos;t — even in teams.&lt;/p&gt;
&lt;p&gt;This is the part that&apos;s uncomfortable to admit. The room doesn&apos;t always work. Teams have the potential to provide sustained structural challenge, but in practice, that potential goes unrealized more often than anyone wants to acknowledge. Meetings run long. The whiteboard session covers the happy path and someone says &quot;we can figure out failure handling during implementation.&quot; The saga-versus-policy distinction doesn&apos;t get raised because the person who would have caught it is focused on their own context, or didn&apos;t attend, or didn&apos;t push hard enough because the meeting was already over time.&lt;/p&gt;
&lt;p&gt;And then someone discovers the problem six weeks into implementation, when step three fails and steps one and two have already committed and there&apos;s no compensation logic because nobody modeled the failure path. The rework isn&apos;t just code — it&apos;s architecture. The cost multiplier is enormous. That&apos;s the painful reality of DDD in practice: the methodology is designed to surface these questions early, but the delivery mechanism — team collaboration under time pressure — frequently lets them through.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The methodology is designed to surface these questions early, but the delivery mechanism — team collaboration under time pressure — frequently lets them through.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Over the course of that specification, I resolved over thirty structural gaps through this kind of back-and-forth. Not gaps the AI found on its own — gaps that surfaced because the collaboration forced both sides to be precise. The AI would propose a boundary. I&apos;d reject it because it violated a domain invariant I hadn&apos;t articulated yet. The act of articulating it refined the model. The model converged through pressure. Exactly the way Evans described. Just without the room.&lt;/p&gt;
&lt;p&gt;I want to be honest about what this isn&apos;t. It&apos;s not a replacement for a team. A team brings diverse domain expertise — people who&apos;ve lived inside a business for years and carry implicit knowledge no specification captures. A team brings political context about organizational boundaries that should inform where bounded contexts land. A team brings the kind of cross-functional challenge where a product owner says &quot;that&apos;s not how the business works&quot; and the model changes in a direction no architect would have found alone.&lt;/p&gt;
&lt;p&gt;But for the structural discipline — the systematic challenging of every boundary, every classification, every event flow against the full model state — an AI collaborator with perfect recall and zero social cost of disagreement is more effective than I expected. Not because it knows the domain. Because it doesn&apos;t let you skip the hard questions.&lt;/p&gt;
&lt;p&gt;The conversation is different from what anyone expected. It&apos;s not &quot;AI generates domain models.&quot; It&apos;s &quot;AI recreates the forcing function that makes domain models converge.&quot;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;It&apos;s not &quot;AI generates domain models.&quot; It&apos;s &quot;AI recreates the forcing function that makes domain models converge.&quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The methodology still works. The collaboration just looks different.&lt;/p&gt;
&lt;p&gt;This is the second post in a series on rigorous domain modeling without a team. Next: what a decade of DDD practice missed — and how a structured assessment surfaced it.&lt;/p&gt;
</content:encoded></item><item><title>DDD has a solo-builder problem, and nobody talks about it.</title><link>https://blog.coada.dev/ddd-has-a-solo-builder-problem-and-nobody-talks-about-it/</link><guid isPermaLink="true">https://blog.coada.dev/ddd-has-a-solo-builder-problem-and-nobody-talks-about-it/</guid><description>Every serious architect I know agrees: if you&apos;re building software for a complex domain, Domain-Driven Design is the gold standard. Evans was right. Vernon was right. The methodology works.
But here&apos;s</description><pubDate>Mon, 09 Mar 2026 05:57:02 GMT</pubDate><content:encoded>&lt;p&gt;Every serious architect I know agrees: if you&apos;re building software for a complex domain, Domain-Driven Design is the gold standard. Evans was right. Vernon was right. The methodology works.&lt;/p&gt;
&lt;p&gt;But here&apos;s the thing nobody says out loud — it was designed for teams.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Knowledge crunching, the beating heart of DDD, is a collaborative act. Evans describes it as continuous conversation between developers and domain experts. Two or more humans in a room, wrestling with ambiguity until a shared model emerges. The developer learns the domain. The expert learns to see their own knowledge structured differently. The model is the artifact of that collision.&lt;/p&gt;
&lt;p&gt;Event storming? You need a wall, sticky notes, and a room full of people who disagree productively. Context mapping? That&apos;s a negotiation between teams about how their models relate. Strategic design? It assumes you have multiple bounded contexts owned by multiple groups, and the hard part is drawing the boundaries between them.&lt;/p&gt;
&lt;p&gt;Every foundational practice in DDD assumes you&apos;re not alone.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;So what happens when you are?&lt;/p&gt;
&lt;p&gt;There are thousands of architects building serious systems by themselves. Startup founders. Solo technical leads. Indie builders working on domains complex enough to warrant DDD — healthcare, finance, logistics, compliance — but without a cross-functional team to practice it with.&lt;/p&gt;
&lt;p&gt;They know DDD is the right approach. They&apos;ve read the blue book. Probably the red book too. They can recite the tactical patterns in their sleep: aggregates, entities, value objects, domain events, repositories.&lt;/p&gt;
&lt;p&gt;But the strategic work — the part that actually matters — requires a kind of collaboration they don&apos;t have access to.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;What most of them do instead is predictable and understandable. They adopt the tactical patterns without the strategic discipline. They define aggregates without doing bounded context analysis. They emit domain events without modeling policies or sagas. They build what looks like DDD from the outside but is really CRUD with fancier names.&lt;/p&gt;
&lt;p&gt;This isn&apos;t a criticism. It&apos;s a structural problem with the methodology&apos;s delivery mechanism.&lt;/p&gt;
&lt;p&gt;DDD transfers knowledge through team osmosis. The junior developer absorbs strategic design instincts by sitting in rooms where those conversations happen. The mid-level architect internalizes bounded context boundaries by watching senior practitioners negotiate them across teams. The vocabulary gets transmitted through practice, not just through books.&lt;/p&gt;
&lt;p&gt;If you don&apos;t have the room, the team, or the senior practitioner next to you, whole layers of the methodology become invisible. Not wrong — invisible. You don&apos;t know what you&apos;re missing because the gaps are in the implicit knowledge that DDD assumes you&apos;ll acquire through collaboration.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;DDD transfers knowledge through team osmosis. If you don&apos;t have the room, the team, or the senior practitioner next to you, whole layers of the methodology become invisible.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;em&gt;I&apos;ll give you one example from my own experience.&lt;/em&gt; I&apos;ve been practicing DDD for over a decade. I&apos;d have called myself competent. Comfortable with aggregates, event sourcing, CQRS, the whole stack.&lt;/p&gt;
&lt;p&gt;Then I ran a structured assessment on my own understanding and found a gap I didn&apos;t know existed: I had no vocabulary for the reactive path.&lt;/p&gt;
&lt;p&gt;My command-side architecture was clean. Domain-organized. Rigorous language. But the event-handling side — the part where &quot;something happened&quot; turns into &quot;decide what to do about it&quot; — was an undifferentiated mass of event handlers. No structural distinction between a policy, a saga, and a projection. All lumped under &quot;event processing.&quot;&lt;/p&gt;
&lt;p&gt;I wasn&apos;t modeling half of my system. I was just coding it.&lt;/p&gt;
&lt;p&gt;That gap didn&apos;t come from laziness or lack of reading. It came from never having sat in a room where someone pointed at a reactive flow and said &quot;that&apos;s a policy, that&apos;s a saga, and here&apos;s why the distinction matters.&quot; The concepts existed in the literature. The community had named these patterns clearly. But without the team context where that vocabulary gets transmitted through practice, I&apos;d been building around the gap for years without seeing it.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;That&apos;s one architect. One gap. Found after a decade.&lt;/p&gt;
&lt;p&gt;Now multiply that across every solo practitioner who&apos;s doing their best with DDD but missing the collaborative forcing functions that the methodology was designed to run on.&lt;/p&gt;
&lt;p&gt;The patterns are accessible. The books are available. The conference talks are on YouTube. None of that replaces the thing DDD actually depends on: two or more people challenging each other&apos;s understanding of a domain until the model gets sharper.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;This raises a question I&apos;ve been sitting with.&lt;/p&gt;
&lt;p&gt;Is DDD&apos;s accessibility barrier the methodology itself? Or is it the assumption that practicing it requires a team?&lt;/p&gt;
&lt;p&gt;Because if it&apos;s the latter — if the hard part isn&apos;t the concepts but the delivery mechanism — then the question isn&apos;t how to simplify DDD. It&apos;s how to give solo practitioners access to the collaborative pressure that makes it work.&lt;/p&gt;
&lt;p&gt;That&apos;s not a tooling question. It&apos;s not about generating code or automating boilerplate. It&apos;s about whether you can recreate the conditions under which domain models get refined: sustained challenge, systematic feedback, and a counterpart that holds the full model in working memory while you interrogate specific corners of it.&lt;/p&gt;
&lt;p&gt;I don&apos;t think the answer is &quot;just find a team.&quot; For a lot of builders, the team isn&apos;t an option. The domain is still complex. The system still needs to be built right.&lt;/p&gt;
&lt;p&gt;DDD has a solo-builder problem. And until we acknowledge it, thousands of architects will keep building halfway — tactical patterns without strategic discipline, domain events without the conversations that give them meaning.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;The methodology works. The question is who gets to use it.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;em&gt;This is the first post in a series exploring what rigorous domain modeling looks like when you don&apos;t have a team in the room. More soon.&lt;/em&gt;&lt;/p&gt;
</content:encoded></item></channel></rss>