
The Three Layers Your AI Agent Can't See

  • Writer: mehrankimi
  • Jan 6
  • 3 min read

The Demo Worked. Production Didn't.

The AI agent was impressive. In the pilot, it handled requests in seconds, pulled the right data, and generated outputs that made executives nod approvingly. Six weeks into production, the story changed.


The agent couldn't explain why a certain vendor was approved last year despite missing documentation. It didn't know that "Route 7" internally meant something different than the official designation. It had no idea that Maria in procurement had a standing agreement with a key supplier, one that existed only in email threads and a conversation from 2019.


The agent wasn't broken. It was blind.


And here's the uncomfortable truth most organizations aren't willing to accept: the AI didn't fail them. They failed the AI, years before they ever deployed it.


An Old Problem, Newly Exposed

Here's the thing: AI agents didn't create this problem. They just exposed it faster and more brutally than any technology before.


Government agencies and commercial enterprises have been deploying systems into knowledge-poor environments for decades. The workaround was always the same: humans filled the gaps. Someone called Bob. Someone remembered the exception. Someone knew why that decision was made three years ago.


AI agents don't have that fallback. They either have the information or they don't. There's no institutional memory to lean on, no hallway conversation to reference, no "let me check with Linda."


Anthropic's research puts it clearly: deploying AI for complex tasks "might be constrained more by access to information than underlying model capabilities." The bottleneck isn't the model's intelligence. It's the organization's readiness to feed it what it needs to operate.


The Three Layers Your AI Agent Needs

Organizations hold knowledge in three layers. AI agents require all of them to function. Most organizations are blind to at least one.


Layer 1: Explicit Knowledge. What's documented and digitized. SOPs, policies, databases, official procedures. This is table stakes, and most organizations overestimate how solid they are here. Documents are outdated, scattered, or written in ways that capture what but not how.


Layer 2: Tacit Knowledge. What's known but never written down. Research by Oranga suggests roughly 90% of organizational knowledge falls into this category, undocumented and inaccessible. It lives in Slack threads, verbal handoffs, and the heads of your most experienced people. When they leave, it leaves with them.


Layer 3: Decision-Trace Knowledge. Why things happened the way they did. Foundation Capital calls this the "context graph," the missing record of how rules were applied, where exceptions were granted, and which precedents shaped outcomes. Enterprise software captures that a decision was made. It rarely captures why.


Without Layer 1, your AI agent can't act. Without Layer 2, it can't adapt. Without Layer 3, it can't reason.


Most organizations are fixing Layer 1, vaguely aware of Layer 2, and completely blind to Layer 3.


Quick, practical guidance tied to each layer:

  • Layer 1: Audit your documented knowledge. Is it current, centralized, and accessible to systems, not just people?

  • Layer 2: Start capturing tacit knowledge deliberately. Exit interviews, process shadowing, structured knowledge transfers before your experienced people walk out the door.

  • Layer 3: Begin recording decision context, not just decisions. The why behind exceptions, approvals, and precedents needs a home.
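As an illustrative sketch only, the three-layer audit above could be modeled as a simple checklist structure. The layer names come from this article; the specific questions, field names, and scoring scheme are hypothetical assumptions, not an established framework:

```python
from dataclasses import dataclass, field

@dataclass
class LayerAudit:
    """One knowledge layer plus its audit questions (illustrative only)."""
    name: str
    questions: list[str]
    # Maps each question to a yes/no answer; empty until the audit runs.
    answers: dict[str, bool] = field(default_factory=dict)

    def score(self) -> float:
        """Fraction of audit questions answered 'yes' (0.0 if unanswered)."""
        if not self.answers:
            return 0.0
        return sum(self.answers.values()) / len(self.questions)

# Hypothetical starter checklist, one entry per layer from the article.
AUDIT = [
    LayerAudit("Explicit", [
        "Documentation is current",
        "Documentation is centralized",
        "Systems (not just people) can access it",
    ]),
    LayerAudit("Tacit", [
        "Exit interviews capture process knowledge",
        "Critical roles have shadowing / handover plans",
    ]),
    LayerAudit("Decision-Trace", [
        "Exceptions record the 'why', not just the approval",
        "Precedents are searchable by future decision-makers",
    ]),
]

def readiness_report(audit: list[LayerAudit]) -> dict[str, float]:
    """Return a per-layer readiness score between 0 and 1."""
    return {layer.name: layer.score() for layer in audit}
```

The point of a structure like this isn't the scoring; it's that readiness becomes something you can measure per layer before deployment, rather than discover through failures afterward.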

AI Agents Three Layers

This isn't a technology fix. It's an organizational discipline. And it needs to happen before you deploy, not after your AI agent starts hitting walls.


The Question You Should Be Asking

The companies racing to deploy AI agents are about to learn an expensive lesson. They're buying the most advanced models on the market and feeding them into organizations that can't even tell you why last quarter's exception was approved.

Before your next AI initiative, ask yourself three questions:


  • Can your AI agent access what your best employee knows?

  • Can it understand why your organization made the decisions it made?

  • Can it operate without someone filling the gaps it can't see?


If the answer is no, you're not deploying AI. You're deploying a very expensive mirror, one that will reflect back every knowledge gap you've been ignoring for years.


The bottleneck was never the model.

It was what the model was never given.


At MZI, we help government and enterprise clients close these gaps before they become expensive lessons. Organizational readiness isn't a side project. It's the foundation that makes AI investments actually work.


Sources

  1. Anthropic, Economic Index Report, September 2025. "Deploying AI for complex tasks might be constrained more by access to information than underlying model capabilities." https://www.anthropic.com/research/anthropic-economic-index-september-2025-report

  2. Josephine Oranga (Kisii University), Tacit Knowledge Transfer and Sharing: Characteristics and Benefits of Tacit & Explicit Knowledge, ResearchGate. https://www.researchgate.net/publication/376462964_Tacit_Knowledge_Transfer_and_Sharing_Characteristics_and_Benefits_of_Tacit_Explicit_Knowledge

  3. Jaya Gupta and Ashu Garg, Context Graphs: AI's Trillion-Dollar Opportunity, Foundation Capital, December 2025. https://foundationcapital.com/context-graphs-ais-trillion-dollar-opportunity/
