We Don't Need Another EA Tool

A Bias I Should Declare

Before I share this idea, a confession: I’ve been thinking about this problem for years. I even built something - a Scala-based DSL called Townplanner that lets you express enterprise architecture as code. Version-controlled, with compile-time validation, exportable to diagrams and documentation. It was an experiment, a way to test whether “architecture as code” could actually work in practice.

It’s not a commercial product. It’s not even particularly polished. But building it taught me something: the moment you treat architecture models as code, everything changes. You get pull requests for architecture decisions. You get IDE support and compile-time integrity checks. You get a diff when something changes.

I mention this not to pitch a tool, but to explain where my thinking comes from. I’m biased toward this direction. Take what follows with that context.

Three Inspirations Converging

At my current organisation, three initiatives have been running in parallel. Watching them unfold has crystallised something.

Inspiration 1: the tool search. We’ve been evaluating replacements for our ageing EA tool. Sparx Enterprise Architect has served us well, but it’s showing its age. So we looked at the market - and at the 2025 Gartner® Magic Quadrant™ for Enterprise Architecture Tools. The Leaders quadrant is populated by the usual suspects: SAP LeanIX, Ardoq, Bizzdesign, Orbus Software. Some of them even have knowledge graphs under the hood. And yet, the story is the same: rigid meta-models that force you into their way of thinking. We don’t want to force our architecture practice into someone else’s ontology. We want to model our reality. And every vendor demo reminded me of Chief Brody in Jaws: “You’re gonna need a bigger budget.”

Inspiration 2: the eternal cleanup. In parallel, we’ve had a long-running effort under way to clean up our current repository. Years of accumulated models, diagrams, and documentation that have drifted from reality. The honest assessment: it’s not working. People don’t want to do it - and who can blame them? It’s tedious work with no visible reward. And here’s the painful truth: even if we succeed, it will get dirty again. We’re not solving a problem; we’re maintaining an illusion.

Inspiration 3: Data Product thinking. Elsewhere in the organisation, we’ve been exploring Data Mesh concepts - domain ownership, data as product, federated governance. Different context entirely. Nothing to do with EA.

And then these three initiatives converged, and I started to believe that something I’d been experimenting with for years actually makes sense. What if the Enterprise Architecture Repository isn’t a static artifact, hidden in some esoteric tool? What if it should be a data product? And what if the way to keep it alive isn’t discipline and cleanup sprints, but code, automation, and live data?

Enterprise Architecture as a data product, expressed in code, enriched with live data, queryable by humans and machines alike. I love it when a plan comes together.

Architecture as Code

I’ve always believed architects should write code. Not because we need to prove something, but because code is precise, versionable, testable, and executable. When I describe an integration pattern in prose, there’s room for interpretation. When I express it in code, there isn’t. Gregor Hohpe makes this case compellingly in 37 Things One Architect Knows About IT Transformation, particularly in the chapter “Code Fear Not!” - where he observes that configuration is really just “programming in a poorly designed language without tool support and testing.”

What if the architecture of an organisation could be expressed in a Domain-Specific Language, one that reflects your meta-model, not a vendor’s generic one?

Now, building a custom DSL used to be a significant undertaking. Martin Fowler wrote an entire book on the subject back in 2010, and for good reason: it required expertise in parsing, abstract syntax trees, and code generation. But here’s what’s changed: LLMs have fundamentally altered the economics of custom tooling. As Fowler recently observed, LLMs represent a shift in abstraction as significant as the move from assembler to high-level languages. The tedious parts - writing parsers, generating output formats, building integrations - are exactly what LLMs excel at. With GenAI, “build your own EA tool” isn’t a ludicrous proposition. It’s a weekend project with a good coding assistant.

For Townplanner, I chose Scala, specifically Scala 3 with its support for context functions and infix notation. This allows for a DSL that reads almost like natural language while retaining full type safety. Here’s what such a model might look like:

// Business capability: what the organisation does, with its owner and target operating model
val customerDataManagement = capability("Customer Data Management"):
  domain(Marketing)
  owner(team("Customer Intelligence"))
  
  targetOperatingModel(Replication):
    rationale:
      """Each business unit requires local customer data ownership
        |while maintaining a golden record for compliance and
        |cross-selling analytics.""".stripMargin

// Architecture building block: the logical solution that realises the capability
val cdpBuildingBlock = architectureBuildingBlock("Customer Data Platform"):
  realizes customerDataManagement
  classification(Strategic)
  
  principle("Single Customer View"):
    """All customer touchpoints consolidated into unified profiles,
      |enabling consistent experience across channels.""".stripMargin
  
  principle("Real-time Activation"):
    "Segment membership updated within 15 minutes of behavioral signals."

system("Segment CDP"):
  realizes cdpBuildingBlock
  vendor("Twilio Segment")
  status(Production)
  hosting(aws.managed("segment-prod-eu-west-1"))
  
  integrates "Salesforce CRM" via api("/sources/salesforce/contacts")
  integrates "Marketing Automation" via event("identify.completed")
  integrates "Data Warehouse" via reverseEtl(schedule = "*/15 * * * *")
  integrates "Website" via sdk("analytics.js")

This isn’t pseudocode for illustration. It’s a sketch of what a company-specific architecture DSL might look like. Notice how it captures the full EA stack: business capability with its target operating model and rationale, the architecture building block that realises it, and the concrete IT system that implements it. The meta-model is yours. The relationships are explicit. And because it’s Scala, you get IDE support, compile-time validation, and a rich ecosystem to build upon.

From Static Model to Knowledge Graph

A DSL gives us precision, but the real power comes from what we build on top of it. Parse that DSL into a knowledge graph, and suddenly architecture becomes queryable:

  • “Which systems would be affected if we deprecated this API?”
  • “Show me all integrations that cross domain boundaries without going through our integration platform.”
  • “What capabilities have no designated owner?”

These questions are hard to answer with traditional EA tools. With a knowledge graph and an LLM interface, they become conversational.
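To make that tangible, here’s a rough sketch of how the first and last of those questions could be answered once the DSL is parsed into a graph. The in-memory Node/Edge representation below is deliberately minimal and purely illustrative - in a real setup you’d back it with a graph store, and none of these names come from Townplanner. The LLM layer’s job would be to translate a natural-language question into exactly this kind of traversal.

// Minimal in-memory graph: every model element becomes a node, every relationship an edge.
case class Node(id: String, kind: String, attributes: Map[String, String] = Map.empty)
case class Edge(from: String, to: String, relation: String)

case class ArchitectureGraph(nodes: Set[Node], edges: Set[Edge]):

  // "Which systems would be affected if we deprecated this API?"
  def impactedByDeprecation(apiId: String): Set[Node] =
    val consumers = edges.filter(e => e.to == apiId && e.relation == "consumes").map(_.from)
    nodes.filter(n => n.kind == "system" && consumers.contains(n.id))

  // "What capabilities have no designated owner?"
  def ownerlessCapabilities: Set[Node] =
    val owned = edges.filter(_.relation == "ownedBy").map(_.from)
    nodes.filter(n => n.kind == "capability" && !owned.contains(n.id))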

Live Enrichment

Since we’re in an environment we control, building our own data product, that knowledge graph doesn’t have to be static. It can be enriched with live data:

  • Cloud asset inventories - What’s actually deployed? Does it match what we designed?
  • Observability platforms - Which integrations are actively used? What are the actual traffic patterns?
  • CI/CD metadata - When was this component last deployed? By whom?
  • Cost data - What does this capability actually cost to run?

Suddenly, the architecture model isn’t just a description of intent. It’s a living view that includes reality.
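As a sketch of what one enrichment step could look like - reusing the toy graph from the previous section, and assuming (purely for illustration) that deployed resources carry a system-id tag linking them back to the model:

// CloudAsset stands in for whatever your cloud inventory export actually provides.
case class CloudAsset(resourceId: String, tags: Map[String, String])

// For every modelled system, record whether a matching resource is actually deployed,
// assuming teams tag resources with the system identifier used in the model.
def deploymentStatus(graph: ArchitectureGraph, inventory: List[CloudAsset]): Map[String, Boolean] =
  val deployedIds = inventory.flatMap(_.tags.get("system-id")).toSet
  graph.nodes
    .filter(_.kind == "system")
    .map(n => n.id -> deployedIds.contains(n.id))
    .toMap

Even a trivial join like this starts to answer the question the cleanup initiative never could: does what we’ve modelled still exist?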

Fitness Functions

In evolutionary architecture, fitness functions are automated checks that validate whether your system still meets its architectural goals. We typically apply them at the system level, but what if we applied them at the enterprise level?

  • “All external integrations must go through the API Gateway” - Check the actual traffic.
  • “No direct database connections between domains” - Validate against the cloud configuration.
  • “Every business-critical system must have a designated owner” - Query the knowledge graph.

The as-designed meets the as-built. Continuously. Automatically.
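Here is a sketch of the third check, expressed against the same toy graph as before. The criticality attribute and the ownedBy relation are assumptions about how the model is tagged, not Townplanner features.

// An enterprise-level fitness function: a named check over the model and whatever live data it needs.
case class FitnessResult(rule: String, violations: Set[String])

def businessCriticalSystemsHaveOwners(graph: ArchitectureGraph): FitnessResult =
  val critical = graph.nodes.filter(n =>
    n.kind == "system" && n.attributes.get("criticality").contains("business-critical"))
  val owned = graph.edges.filter(_.relation == "ownedBy").map(_.from)
  FitnessResult(
    rule = "Every business-critical system must have a designated owner",
    violations = critical.map(_.id).diff(owned))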

When they diverge - and they will - you have a choice: update the architecture to reflect the new reality, or address the drift. Either way, you’re making a conscious decision rather than letting documentation silently rot.

What Becomes Possible

Once you start thinking of architecture as a data product, new possibilities open up.

Architecture for everyone, not just architects.

Today, if a product owner wants to understand the integration landscape around their domain, they typically have two options: dig through Confluence pages of varying freshness, or ask an architect. Both create friction. Both reinforce the perception that architecture knowledge lives in an ivory tower, dispensed by specialists. “What we’ve got here is failure to communicate.”

But if architecture is a queryable data product? Connect it to PowerBI or your dashboarding tool of choice. Suddenly a delivery manager can see which systems their team depends on. A security officer can view all external integrations without filing a request. A finance lead can correlate architecture decisions with cost data.
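The hand-off can be as unglamorous as a flat file the dashboard refreshes from. A minimal sketch, again using the toy graph from earlier, with the file layout entirely illustrative:

import java.nio.file.{Files, Paths}

// Flatten the relationship edges into a simple table that PowerBI, or any other
// dashboard tool, can refresh from on a schedule.
def exportEdgesAsCsv(graph: ArchitectureGraph, path: String): Unit =
  val header = "from,relation,to"
  val rows = graph.edges.toList.map(e => s"${e.from},${e.relation},${e.to}")
  Files.write(Paths.get(path), (header :: rows).mkString("\n").getBytes("UTF-8"))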

The EA team doesn’t disappear - but their role shifts. Less gatekeeping, more curation. Less “come to us with questions,” more “here’s a self-service capability we maintain for you.”

A shared language across the organisation.

In Domain-Driven Design, we talk about ubiquitous language - a shared vocabulary between technical and business people within a bounded context. It’s powerful when it works. But it rarely scales beyond individual teams.

What if your architecture DSL was that ubiquitous language, but at the enterprise level? The same terms used in strategic planning (“Customer Data Management capability”) appearing in the code, in the dashboards, in the integration contracts. Business, strategy, and IT literally speaking the same language - not because someone wrote a glossary, but because the language is encoded in a living model that everyone references.

This isn’t just semantic tidiness. It reduces the translation errors that plague large organisations. When the board discusses “customer onboarding,” they’re referencing the same node in the knowledge graph that engineering uses when they deploy changes to it.

EA as a team among teams.

In the Data Mesh world, domain teams own their data products. They have consumers. They think about quality, discoverability, usability. They’re accountable.

What if the EA team operated the same way? Architecture knowledge as a product, with the rest of the organisation as consumers. You’d measure adoption. You’d care about developer experience. You’d iterate based on feedback.

This reframes enterprise architecture from a governance function to a platform function. You’re providing a capability that makes other teams more effective. If nobody’s using your architecture data product, that’s a signal. Not that people don’t appreciate architecture, but that you’re not meeting their needs.

It’s a humbling reframe. And I think a healthy one.

Queryable by machines, not just humans.

We’ve talked about LLMs enabling natural language queries against the knowledge graph. But the consumers don’t have to be human at all.

Imagine your CI/CD pipeline querying the architecture model: “Is this service classified as business-critical? Then enforce these additional quality gates.”

Or your cloud governance tooling: “This deployment is creating a new integration between domains. Flag it for architectural review.”

Or an automated impact analysis when planning infrastructure changes: “Which teams should be notified about this planned maintenance window, based on their dependencies?”
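As a sketch of the CI/CD case - the endpoint, URL, and response format belong to a hypothetical API in front of the architecture data product, not to any existing tool:

import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

// Ask the (hypothetical) architecture data product how a service is classified,
// so the pipeline can decide which quality gates to enforce.
def isBusinessCritical(serviceId: String): Boolean =
  val request = HttpRequest.newBuilder()
    .uri(URI.create(s"https://architecture.internal/api/systems/$serviceId/criticality"))
    .GET()
    .build()
  val response = HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString())
  response.body().trim == "business-critical"

// In a pipeline step: if isBusinessCritical(serviceId) holds, require the stricter quality gates to pass.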

The architecture model becomes infrastructure - something other systems build upon, not just something humans consult.

What This Isn’t (And What EA Still Is)

I want to be careful here. This idea is about how we manage architecture knowledge - not about the discipline of enterprise architecture itself.

Enterprise architects do crucial work that no data product can replace. Stakeholder management. Facilitating difficult conversations between teams with competing priorities. Translating between board-level strategy and technical reality. Sensing organisational dynamics and knowing when to push and when to wait. Helping leaders make decisions with incomplete information. Building trust across silos.

This is deeply human work. It requires judgment, empathy, political awareness, and experience. It’s also, frankly, the hard part of the job - and the part that creates the most value.

The problem is that traditional EA tooling forces architects to spend significant time on something else entirely: maintaining documentation. Updating diagrams. Chasing people for information that’s already stale. Running cleanup initiatives that never quite succeed.

The data product approach isn’t about diminishing the architect’s role. It’s about automating the parts that can be automated - the knowledge capture, the reality-checking, the basic queries - so that architects can focus on the parts that can’t. Let the machines handle the bookkeeping; save the humans for the judgment calls.

If your architecture knowledge is a living, self-updating data product, you’re not spending your Thursday afternoon updating a capability map. You’re spending it helping a product team navigate a difficult integration decision. Or advising leadership on the technical implications of a market shift. Or facilitating a conversation between two domain teams who’ve discovered they’re building the same thing.

That’s the work. The documentation was never the point - it was supposed to support the point.

An Invitation

This is an exploration, not a conclusion. I’m still working through it, still finding the edges and the gaps. If you’re thinking about similar questions, I’d love to hear where your thinking has taken you.

The building blocks exist: knowledge graphs, LLMs, infrastructure-as-code, observability platforms, Data Mesh thinking. Perhaps the next generation of enterprise architecture tooling isn’t software you buy, but a capability you build - one that treats architecture knowledge with the same rigour we’ve learned to apply to code and data.

“Where we’re going, we don’t need roads.” Maybe we don’t need another EA tool either.