Designing for Autonomy: What Agentic AI Demands from Enterprise Architecture

Agentic AI · Enterprise Architecture · Data Strategy

For most of the last two decades, enterprise data architecture has been designed around one core assumption: humans reason, systems move data. Every pipeline, schema, integration pattern, and access control was built for a world where a person or a deterministic process sat at the end of the chain. The system delivered the data. The human decided what to do with it.

That assumption is breaking.

Organizations are moving from AI that answers questions to AI that takes action. Agents that retrieve data on their own, reason over it, chain multiple steps together, and execute decisions with varying degrees of autonomy. This is not theoretical. I am seeing it on real engagements. And the problem is not the agent. The problem is that the architecture underneath it was never designed for this kind of consumer.

The architecture assumes a human on the other end

Traditional enterprise data architectures are built around structured movement. You define a source. You define a destination. You build a pipeline. You specify the schema, the transformations, the schedule, the access controls. Everything is predefined because you know, in advance, what data is needed, where it goes, and what happens when it arrives.

That model works when the consumer is a dashboard, a report, a downstream system, or a human analyst writing a query. It works because the reasoning happens outside the architecture. The architecture just moves bytes.

An agent is a different kind of consumer. It does not follow a predefined path. It decides at runtime what data it needs. It interprets the meaning of fields from metadata. It calls tools, evaluates results, adjusts its approach, and chains actions across multiple systems in a single task. It holds context across steps. It operates with a degree of freedom that no previous consumer of enterprise data has had.

Most organizations are trying to plug agents into architectures that were designed for structured, deterministic data movement. It works for simple use cases. It breaks the moment the agent needs to do anything that was not anticipated in advance. And the whole point of an agent is to handle things that were not anticipated in advance.

The unit of design changes

This is the core shift. Traditional architectures are designed around data flow: how data moves from source to destination. Agentic architectures need to be designed around agent reasoning: how an agent discovers data, understands what it means, decides what to do with it, remembers what it has done, and coordinates with other agents.

Those are different design problems. And they produce different architectural requirements.

When I started working on agentic capability models, first on a client engagement and then extending the thinking on my own, the most striking realization was how many of the domains that matter for agentic AI simply do not exist in traditional data frameworks. I have built capability assessments using standard enterprise data frameworks before. Ten domains, hundreds of capabilities across governance, architecture, engineering, integration, quality, MDM, analytics, security. The usual structure. It is useful for what it was built for.

But when you try to use that same lens to assess an organization's readiness for agentic AI, it does not answer the right questions. It can tell you whether your data is governed and your pipelines are reliable. It cannot tell you whether an agent can discover, understand, and act on that data autonomously.

What autonomy demands from the architecture

When you design for autonomy instead of structured movement, new architectural requirements emerge that traditional frameworks do not cover. A few stand out.

Governed, discoverable connectivity. An agent does not use a predefined pipeline. It needs to discover and connect to systems of record at runtime through governed interfaces. The Model Context Protocol, MCP, is one pattern gaining traction here. The idea is that agents access enterprise systems through a managed gateway with registered tools, authentication, and audit logging. Not hardcoded integrations. Not point-to-point API calls buried in application code. A governed layer that lets agents connect to the systems they need while maintaining control over what they can access and do.
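
To make the gateway idea concrete, here is a minimal sketch in Python. It is not the MCP specification itself; the names (Tool, Gateway, the scope strings) are hypothetical, and a real gateway would sit behind authentication and a proper tool-calling protocol. What it illustrates is the pattern: tools are registered centrally, an agent discovers only what its scopes allow, and every invocation is audited.

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Tool:
    name: str
    scopes: set          # permissions an agent must hold to invoke this tool
    handler: Callable


@dataclass
class Gateway:
    """Governed access layer: agents invoke registered tools only."""
    tools: dict = field(default_factory=dict)
    audit_log: list = field(default_factory=list)

    def register(self, tool: Tool):
        self.tools[tool.name] = tool

    def discover(self, agent_scopes: set) -> list:
        # Runtime discovery: an agent sees only the tools its scopes permit.
        return [t.name for t in self.tools.values() if t.scopes <= agent_scopes]

    def invoke(self, agent_id: str, tool_name: str, agent_scopes: set, **kwargs):
        tool = self.tools[tool_name]
        allowed = tool.scopes <= agent_scopes
        self.audit_log.append((agent_id, tool_name, allowed))  # every call audited
        if not allowed:
            raise PermissionError(f"{agent_id} lacks scopes for {tool_name}")
        return tool.handler(**kwargs)


gw = Gateway()
gw.register(Tool("crm.lookup_customer", {"crm:read"},
                 lambda cid: {"id": cid, "segment": "B2"}))
gw.register(Tool("crm.update_customer", {"crm:write"},
                 lambda cid, **fields: "updated"))

read_only = {"crm:read"}
print(gw.discover(read_only))  # the write tool is not even visible
print(gw.invoke("agent-7", "crm.lookup_customer", read_only, cid="C123"))
```

The point of routing everything through one layer is that control and observability come for free: there is exactly one place where access is decided and logged, instead of point-to-point calls scattered through application code.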

Memory and context persistence. A traditional data architecture does not need to remember what a consumer did last time. An agent does. Short-term memory within a session, long-term memory across sessions, and shared knowledge stores that multiple agents can draw from. This is a new infrastructure requirement. It is not something you can bolt onto an existing data platform. It needs to be designed: what gets stored, how long it persists, who can access it, and how it is governed.
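
A rough sketch of those design decisions, with hypothetical names throughout: session-scoped short-term memory that is discarded when the session ends, and a long-term store where retention and access are explicit fields rather than afterthoughts.

```python
import time
from collections import defaultdict


class AgentMemory:
    """Tiered agent memory: session-scoped short-term entries plus a
    governed long-term store with explicit retention and access rules."""

    def __init__(self):
        self.short_term = defaultdict(list)   # session_id -> recent context
        self.long_term = []                   # persists across sessions

    def remember(self, session_id: str, item: str):
        self.short_term[session_id].append(item)

    def persist(self, item: str, ttl_seconds: float, readable_by: list):
        # Governance is explicit at write time: how long it lives, who may read it.
        self.long_term.append({
            "item": item,
            "expires": time.time() + ttl_seconds,
            "readable_by": set(readable_by),
        })

    def recall(self, agent_id: str) -> list:
        now = time.time()
        return [e["item"] for e in self.long_term
                if e["expires"] > now and agent_id in e["readable_by"]]

    def end_session(self, session_id: str):
        self.short_term.pop(session_id, None)  # short-term memory is discarded


mem = AgentMemory()
mem.remember("s1", "user asked about Q3 churn")
mem.persist("churn means 90 days of inactivity",
            ttl_seconds=3600, readable_by=["agent-a", "agent-b"])
mem.end_session("s1")
print(mem.recall("agent-a"))   # survives the session
print(mem.recall("agent-z"))   # not in readable_by: sees nothing
```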

A semantic layer built for machine reasoning. This is where my earlier work on metadata and governance connects directly. When a human analyst looks at a column called cust_seg_cd, they might ask a colleague what it means. An agent cannot do that. It needs a business ontology, governed metric definitions, process context, and entity relationships that are machine-readable and semantically rich. The business glossary that exists as a PDF in SharePoint is not sufficient. The semantic layer becomes infrastructure, not documentation.
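
What "machine-readable and semantically rich" might look like at the level of a single column, as a hypothetical glossary entry. The field names and values here are illustrative, not a standard; the point is that the agent resolves meaning from governed metadata instead of asking a colleague.

```python
# Hypothetical governed glossary entry for the column cust_seg_cd.
SEMANTIC_LAYER = {
    "cust_seg_cd": {
        "business_name": "Customer segment code",
        "definition": "Segment assigned by the annual value-tiering process",
        "value_domain": {"P": "Platinum", "G": "Gold", "S": "Standard"},
        "owner": "customer-data-domain",
        "related_entities": ["customer", "segment"],
    }
}


def resolve(column: str) -> dict:
    """An agent asks the semantic layer what a field means, at runtime."""
    entry = SEMANTIC_LAYER.get(column)
    if entry is None:
        raise KeyError(f"no governed definition for {column}")
    return entry


print(resolve("cust_seg_cd")["business_name"])
```

Notice that everything a human would supply from tribal knowledge, such as the value domain and the process that produces the code, has to be present in the entry, because the agent has no one to ask.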

Governance for non-human actors. Traditional identity and access management was designed for people. Role-based access control assumes a human with a job title and a set of responsibilities. An agent is a different kind of principal. It needs its own identity, scoped permissions, token delegation, guardrails on what actions it can take, and observability into every decision it makes. Human-in-the-loop authorization for high-risk actions. Content and output guardrails. Behavioral evaluation and regression testing. None of this exists in a traditional RBAC model.
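
A minimal sketch of an agent as its own principal, under stated assumptions: the scope strings, the high-risk action list, and the approval flag are all hypothetical, and a real system would involve token issuance and delegation rather than an in-process check. What it shows is the shape of the model: scoped grants, a human-in-the-loop gate for high-risk actions, and a decision log for every authorization.

```python
from dataclasses import dataclass, field


@dataclass
class AgentPrincipal:
    """A non-human principal: its own identity, scoped grants, decision log."""
    agent_id: str
    scopes: set
    decision_log: list = field(default_factory=list)


HIGH_RISK = {"payments:refund", "hr:revoke_access"}  # illustrative list


def authorize(agent: AgentPrincipal, action: str,
              human_approved: bool = False) -> bool:
    if action not in agent.scopes:
        verdict = "denied:scope"
    elif action in HIGH_RISK and not human_approved:
        verdict = "pending:human_approval"       # human-in-the-loop gate
    else:
        verdict = "allowed"
    agent.decision_log.append((action, verdict))  # observability: every decision
    return verdict == "allowed"


bot = AgentPrincipal("billing-agent", {"invoices:read", "payments:refund"})
print(authorize(bot, "invoices:read"))                        # True
print(authorize(bot, "payments:refund"))                      # False: needs a human
print(authorize(bot, "payments:refund", human_approved=True)) # True
```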

Agent-to-agent coordination. In a multi-agent architecture, agents need to discover each other, delegate tasks, share context, and aggregate results. This requires protocols and patterns that have no equivalent in traditional data integration. It is closer to service-oriented architecture than to ETL, but with the added complexity that the participants are non-deterministic.
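
The discover-delegate-aggregate loop can be sketched as a simple capability registry. This is a toy, with hypothetical names, and it deliberately ignores the hard part, which is that real agents are non-deterministic; but it shows why this resembles service orientation more than ETL: capabilities are advertised, tasks are delegated, and shared context travels with the task rather than living in a pipeline.

```python
class AgentRegistry:
    """Discovery and delegation between agents: capabilities are advertised
    and looked up at runtime, not wired together in advance."""

    def __init__(self):
        self.agents = {}  # capability name -> handler

    def advertise(self, capability: str, handler):
        self.agents[capability] = handler

    def delegate(self, capability: str, task, context: dict):
        # Shared context travels with the delegated task.
        return self.agents[capability](task, context)


registry = AgentRegistry()
registry.advertise("lookup",
                   lambda task, ctx: {"customer": task, "region": ctx["region"]})
registry.advertise("summarize",
                   lambda task, ctx: f"{task['customer']} in {task['region']}")

# An orchestrating agent chains the two and aggregates the result.
ctx = {"region": "EMEA"}
record = registry.delegate("lookup", "C123", ctx)
print(registry.delegate("summarize", record, ctx))  # → "C123 in EMEA"
```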

None of these are incremental extensions of existing capabilities. They are new domains. That is why plugging an agent into a traditional architecture feels brittle. The architecture was not designed to answer the questions the agent is asking.

This is not a rip and replace

To be clear about what I am claiming and what I am not: I am not saying traditional enterprise data architecture is obsolete. The systems of record, the data platforms, the governance frameworks, the integration patterns, all of that still matters. An agent still needs clean master data, governed metadata, and reliable pipelines. The previous article I wrote about the gap between AI strategy and execution is still true.

What I am saying is that traditional architecture is necessary but not sufficient. It provides the foundation, the systems of record and the data assets that agents connect to. But the layers that sit between those systems and an autonomous agent, the gateway, the memory, the semantic intelligence, the governance for non-human actors, those need to be designed intentionally. They do not emerge on their own from an existing data platform.

The architecture question has changed

The question most organizations are asking is: how do we deploy AI agents on top of our existing data infrastructure? That is the wrong framing. It assumes the existing infrastructure is the fixed point and the agent adapts to it.

The better question is: what does our architecture need to provide for an autonomous agent to discover, reason over, and act on enterprise data reliably and safely?

That question produces a different architecture. One designed for autonomy, not just connectivity. One where the semantic layer, the governed gateway, the memory infrastructure, and the agent governance model are first-class concerns, not afterthoughts.

The enterprise data architectures we have today were built for a world where humans did the reasoning. That was a good design for its time. The consumer has changed. The architecture needs to change with it.
