A Reusable Pattern for Adding Agentic AI to Existing Projects with MCP
I have been building a screenplay tool called Lekhani. The immediate goal was simple enough: add an agentic assistant that can read project state, help the user think through story structure, and eventually propose or commit structured changes.
That should have been an architecture exercise.
Instead, I spent a long time in a swamp of product-specific chat behavior: transition handling, brainstorming loops, follow-up interpretation, correction handling, confirmation handling, and all the small conversational edge cases that appear when a user is speaking naturally inside a creative tool.
That experience clarified something important.
The reusable pattern for adding agentic AI to an existing project is not the same thing as the product-specific assistant behavior layered on top of it.
If you do not separate those two concerns early, the project becomes difficult to reason about very quickly.
The wrong abstraction #
The wrong way to think about this is:
- user says something
- agent interprets it
- agent calls tools
- system state changes
- assistant replies
That looks neat on paper, but in a real product it collapses too many distinct concerns into one loop.
The agent is now expected to do all of the following at once:
- understand natural language
- control conversation state
- know when to read versus write
- decide when a suggestion should become a proposal
- decide when a proposal should become truth
- manage provenance
- reconcile changes with existing application state
That is too much authority for one component.
Once that happens, every bug starts to look like a prompt bug or a model-quality bug, even when the real issue is architectural confusion.
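To make the collapse concrete, here is a minimal sketch of that single-loop shape. All names are illustrative; this is the shape to avoid, not a recommendation.

```python
# The collapsed loop, sketched: one function holding every concern at once.
# Interpretation, planning, writes to truth, and conversation control all
# live in the same place, so no failure can be isolated.
def handle_message(user_text: str, state: dict, model, tools: dict) -> str:
    plan = model(user_text, state)            # interpret + plan + decide in one call
    for step in plan["tool_calls"]:           # reads and writes are indistinguishable
        state.update(tools[step["name"]](**step["args"]))  # mutates truth directly
    return plan["reply"]                      # also controls the conversation

# A stub "model" standing in for an LLM call, to show the flow end to end.
state = {"protagonist": "Asha"}
reply = handle_message(
    "rename the protagonist",
    state,
    model=lambda text, st: {
        "tool_calls": [{"name": "rename", "args": {"new": "Mira"}}],
        "reply": "Done.",
    },
    tools={"rename": lambda new: {"protagonist": new}},
)
```

Note that nothing here distinguishes a read from a write, or a suggestion from a commit: canonical state changed as a side effect of interpretation.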
The better split #
The reusable MCP integration pattern is smaller.
You already have an application with some canonical state and operations. Start there.
The pattern is:
- Canonical application state
- MCP tool surface over that state
- Agent with scoped access to those tools
- Proposal and commit boundary
- Provenance around tool effects
That is the reusable layer.
Everything else is product behavior.
In other words:
- integration pattern: how the agent interacts with the existing system
- product behavior: how the assistant talks to the user inside a specific UX
Once I separated those mentally, the architecture became much easier to reason about.
The five parts #
1. Canonical state stays outside the model #
The model should not be the source of truth.
The existing application already has truth somewhere:
- database tables
- domain objects
- documents
- workflow state
- project metadata
That canonical state should remain authoritative.
In Lekhani, that means the screenplay project, ontology entities, links, and eventually lint findings all live in application-owned storage. The assistant reads and acts against that state. It does not become the state.
This sounds obvious, but it is the first thing that gets blurred when people say they want “an AI-native app”.
2. MCP is the boundary, not the product #
MCP should expose the operations the application already knows how to perform.
Examples:
- read current project summary
- list entities
- create a proposal
- commit a confirmed change
- fetch unresolved issues
- draft a document patch
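A minimal sketch of what that surface might look like behind the MCP boundary. `ToolRegistry` and the tool names here are illustrative, not the MCP SDK's API; the point is that each tool wraps an operation the application already performs.

```python
from typing import Any, Callable, Dict

# Illustrative registry: maps tool names to existing application operations.
class ToolRegistry:
    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., Any]] = {}

    def register(self, name: str, handler: Callable[..., Any]) -> None:
        self._tools[name] = handler

    def call(self, name: str, **kwargs: Any) -> Any:
        return self._tools[name](**kwargs)

registry = ToolRegistry()
# Read tool: delegates to whatever query the app already has.
registry.register("list_entities", lambda project_id: [])
# Proposal tool: records intent; it does not touch canonical state.
registry.register("create_proposal", lambda change: {"status": "proposed", **change})
```

The handlers stay thin on purpose: domain logic remains in the application, and the registry is only an adapter.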
That tool surface is the stable integration layer.
It should not be overloaded with every conversational nuance of the product.
A useful way to think about it is:
- the app owns the domain
- MCP exposes the domain operations
- the agent sequences those operations
This is much better than letting the assistant mutate internals directly.
3. Proposal/commit is the control boundary #
This is the most important pattern.
If the agent can write canonical state directly for every plausible interpretation, the system becomes chaotic.
The right boundary is:
- inference creates proposals
- explicit resolution creates commits
That alone dramatically reduces risk.
For an existing application, this usually means you need at least two classes of tool:
- read tools
- proposal tools
And only sometimes:
- commit tools
The presence of a proposal layer changes the whole system from “model as writer of truth” to “model as participant in a controlled workflow”.
That is the difference between a demo and an architecture.
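The proposal/commit split can be sketched in a few lines, assuming an in-memory store for illustration. The agent may only call `propose`; `commit` runs only on explicit user resolution.

```python
import uuid

# Canonical state stays application-owned; proposals live beside it.
canonical_state: dict = {"logline": "original"}
proposals: dict = {}

def propose(field: str, value: str) -> str:
    """Agent-facing: record an intent. Never touches canonical state."""
    pid = str(uuid.uuid4())
    proposals[pid] = {"field": field, "value": value, "status": "proposed"}
    return pid

def commit(pid: str) -> None:
    """User-facing: apply only an explicitly confirmed proposal."""
    p = proposals[pid]
    canonical_state[p["field"]] = p["value"]
    p["status"] = "applied"

pid = propose("logline", "revised")
assert canonical_state["logline"] == "original"  # inference changed nothing
commit(pid)                                      # explicit resolution did
```

Everything the model infers lands in `proposals`; truth only moves when a human (or an explicit policy) calls `commit`.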
4. Provenance is not optional #
As soon as an agent can act on project state, you need to be able to answer:
- what changed
- why it changed
- what tool caused it
- what user turn it came from
- whether it is a proposal or a committed change
Without that, debugging becomes almost impossible.
A minimal provenance model usually needs:
- a turn or run identifier
- a tool call record
- derived object identifiers
- status: proposed, applied, rejected, superseded
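As a dataclass, that minimal model might look like this. Field names are illustrative; the shape is what matters.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# One provenance record per tool effect: enough to answer "what changed,
# why, via which tool, from which turn, and at what status".
@dataclass
class ProvenanceRecord:
    run_id: str        # which agent turn/run produced this
    tool_name: str     # which tool caused the effect
    derived_ids: list  # identifiers of objects the call created or touched
    status: str        # "proposed" | "applied" | "rejected" | "superseded"
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

rec = ProvenanceRecord(
    run_id="turn-42",
    tool_name="create_proposal",
    derived_ids=["proposal-7"],
    status="proposed",
)
```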
This is where many agent integrations quietly break down. The app works for one happy path, but no one can explain later why a certain state exists.
5. Scoped tool policy matters more than people think #
One of the most useful controls is to avoid exposing the full tool surface on every turn.
The agent should not always see every write operation.
A better pattern is:
- read-only turns get read tools
- proposal turns get read plus proposal tools
- explicit commit turns may get commit tools
This is both safer and easier for the agent.
It reduces the number of bad plans the model can even attempt.
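The policy itself can be a small pure function. The turn kinds and tool groupings below are assumptions for illustration.

```python
# Scoped tool policy: the agent only ever sees tools matching the turn's intent.
READ_TOOLS = {"read_summary", "list_entities", "fetch_issues"}
PROPOSAL_TOOLS = {"create_proposal", "draft_patch"}
COMMIT_TOOLS = {"commit_change"}

def tools_for_turn(turn_kind: str) -> set:
    """Return the tool names exposed for a given turn kind."""
    if turn_kind == "read":
        return set(READ_TOOLS)
    if turn_kind == "propose":
        return READ_TOOLS | PROPOSAL_TOOLS
    if turn_kind == "commit":
        return READ_TOOLS | PROPOSAL_TOOLS | COMMIT_TOOLS
    raise ValueError(f"unknown turn kind: {turn_kind}")
```

Because the write tools are simply absent from most turns, the model cannot plan around them, which is cheaper and more reliable than prompting it not to.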
What is not reusable #
The part that is much less reusable is the product-specific conversational control layer.
In my case, this included things like:
- brainstorming settings
- refining one idea over multiple turns
- switching from setting to character work
- deciding whether “looks good” means confirm
- deciding whether “expand more” means elaborate or suggest an alternative
That is not a generic MCP integration pattern.
That is specific to a writing assistant UX.
Treating that layer as though it were part of the reusable architecture was the main source of confusion.
The lesson was simple:
general integration patterns should stay small; product-specific dialogue systems should stay local.
A practical architecture #
If I were starting again, I would structure it like this.
Layer 1: Existing application #
- canonical domain state
- repositories
- services
- validation
- document model
Layer 2: MCP adapter #
- tool schemas
- tool execution
- scoped tool policy
- proposal/commit split
- provenance hooks
Layer 3: Agent loop #
- read state
- decide whether to observe, propose, or commit
- call tools
- summarize the result
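One agent turn, sketched with those four steps. The `decide` callable stands in for the model; everything else is deterministic plumbing, and the names are illustrative.

```python
# One agent turn: read state, decide, act through scoped tools, summarize.
def run_turn(state: dict, decide, tools: dict) -> str:
    observation = state.copy()                        # 1. read state
    action = decide(observation)                      # 2. observe, propose, or commit?
    if action["kind"] == "observe":
        return f"Observed {len(observation)} field(s)."
    result = tools[action["tool"]](**action["args"])  # 3. call a scoped tool
    return f"{action['kind']}: {result}"              # 4. summarize the result

# Read-only turn: the decide stub chooses to observe.
summary = run_turn({"title": "Lekhani"}, decide=lambda obs: {"kind": "observe"}, tools={})

# Proposal turn: the decide stub routes through a proposal tool.
summary2 = run_turn(
    {"title": "Lekhani"},
    decide=lambda obs: {"kind": "propose", "tool": "create_proposal",
                        "args": {"change": "new logline"}},
    tools={"create_proposal": lambda change: f"proposal({change})"},
)
```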
Layer 4: Product-specific assistant UX #
- chat behavior
- brainstorming flow
- conversational state
- follow-up handling
- presentation decisions
That last layer is where your product-specific complexity lives.
Do not mistake it for the reusable agent integration architecture.
What I would standardize #
If I were extracting a reusable internal framework for “agentic AI on existing projects”, I would standardize only these interfaces:
- DomainStateProvider
- ToolRegistry
- ToolPolicy
- ProposalStore
- CommitApplier
- ProvenanceRecorder
- AgentTurnRunner
That is enough.
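Those interfaces can be sketched as `typing.Protocol` definitions. The method signatures below are assumptions; the original only names the seams.

```python
from typing import Any, Protocol

class DomainStateProvider(Protocol):
    def snapshot(self) -> dict: ...

class ToolRegistry(Protocol):
    def call(self, name: str, **kwargs: Any) -> Any: ...

class ToolPolicy(Protocol):
    def tools_for(self, turn_kind: str) -> set: ...

class ProposalStore(Protocol):
    def add(self, proposal: dict) -> str: ...
    def get(self, proposal_id: str) -> dict: ...

class CommitApplier(Protocol):
    def apply(self, proposal_id: str) -> None: ...

class ProvenanceRecorder(Protocol):
    def record(self, run_id: str, tool_name: str, status: str) -> None: ...

class AgentTurnRunner(Protocol):
    def run_turn(self, user_text: str) -> str: ...

# A trivial in-memory ProposalStore, just to show the seam is implementable.
class InMemoryProposalStore:
    def __init__(self) -> None:
        self._items: dict = {}

    def add(self, proposal: dict) -> str:
        pid = f"proposal-{len(self._items)}"
        self._items[pid] = proposal
        return pid

    def get(self, proposal_id: str) -> dict:
        return self._items[proposal_id]

store = InMemoryProposalStore()
pid = store.add({"field": "logline", "value": "revised"})
```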
Notice what is missing:
- no hardcoded chat flow
- no bespoke dialogue-state machine
- no product-specific UX semantics
Those belong one layer up.
The main failure mode #
The main failure mode is easy to state.
You start by trying to integrate an agent into an app.
Then, without noticing, you start building:
- a dialogue system
- a workflow engine
- a state tracker
- a planner
- a product assistant
- and a domain model
all at once.
At that point, every problem looks interconnected, and it becomes hard to tell what is architecture and what is product behavior.
The way out is to split them again.
The durable lesson #
Agentic integration on an existing project is not about making the model smarter and smarter until it can safely run your app.
It is about building the right boundaries:
- external state
- scoped tools
- proposal before commit
- provenance
- minimal policy around tool access
If those are solid, you can change the model, prompts, or even the product UX without collapsing the whole system.
That is the reusable part.
The chat behavior, brainstorming flow, and assistant personality can change later. The control boundary should not.