Until now, managing risks related to AI deployment largely meant controlling the content employees typed into a chat box. That era, however, is drawing to a close. AI is quickly transforming from an isolated tool into something interwoven into every corner of an organization’s systems. While commentators have speculated for over a year about the risks of “agentic AI” operating across enterprise infrastructure, a technology that enables it, the Model Context Protocol (“MCP”), is already here. Before greenlighting an MCP integration, it is critical to understand what MCP servers are, what purpose they serve, and what mitigating measures are available to address the array of significant risks they can introduce.
What Is MCP?
MCP is an open standard that allows AI tools like Claude or ChatGPT to connect to other systems, like document repositories, CRMs, internal knowledge bases, or communications tools. An MCP server acts as the bridge between the AI tool and the connected system, exposing defined data and functions through a standardized interface.
With MCP configurations in place, the AI tool can retrieve information, call third-party APIs, interact with repositories, or initiate actions in other software. For the user, the workflow looks much the same as any other AI chat interaction: the user submits a prompt, and the AI tool can draw not only on its ordinary sources but also on the connected systems, returning information in the chat or reporting back after initiating an action.
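To make the "bridge" concept concrete, the toy Python sketch below mimics the pattern an MCP server follows: it registers functions as named, described tools, advertises those descriptions to the AI client, and dispatches the client's requests by name. This is an illustration only; the class and method names here are invented for clarity and are not the official MCP SDK.

```python
import json

class DocumentStore:
    """Stand-in for a connected system (e.g., a document repository)."""
    def __init__(self):
        self._docs = {"policy-001": "Acceptable use policy text..."}

    def fetch(self, doc_id):
        return self._docs.get(doc_id, "")

class ToyMCPServer:
    """Illustrative bridge: exposes defined functions through a
    standardized, self-describing interface."""
    def __init__(self):
        self._tools = {}

    def tool(self, name, description):
        """Decorator that registers a function as a named tool."""
        def register(fn):
            self._tools[name] = {"description": description, "fn": fn}
            return fn
        return register

    def list_tools(self):
        """What the AI client discovers: names and descriptions, not code."""
        return [{"name": n, "description": t["description"]}
                for n, t in self._tools.items()]

    def call_tool(self, name, arguments):
        """Dispatch a client's request to the registered function."""
        return self._tools[name]["fn"](**arguments)

store = DocumentStore()
server = ToyMCPServer()

@server.tool("get_document", "Fetch a document from the repository by ID")
def get_document(doc_id: str) -> str:
    return store.fetch(doc_id)

if __name__ == "__main__":
    print(json.dumps(server.list_tools(), indent=2))
    print(server.call_tool("get_document", {"doc_id": "policy-001"}))
```

The key point for risk purposes: once a function is registered, any prompt routed through the AI tool can potentially trigger it, which is why the scope of what gets registered matters so much.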
Why MCP Servers Introduce Risk
Legal teams have been bombarded for years now with the same repeated question: “Can we input this data into AI?” In some cases, sanctioning an MCP server is the equivalent of replying “yes” to all of those requests at once.
Connecting an MCP server may introduce a number of novel risks: sensitive or partner-owned data becomes reachable across previously siloed systems, processing may run afoul of contractual or privacy commitments, bulk extraction becomes easier, and a misconfigured integration can allow an AI to take actions in connected software.
Considerations
MCP integrations offer tangible operational value, potentially transforming how organizations interact with their data. But they simultaneously introduce legal and operational risks that warrant a more rigorous governance process than a typical AI-implementation rollout. Understanding these risks and how to mitigate them will be crucial as these connections become an increasingly common component of enterprise IT. Organizations should put serious thought into governance and guardrails before connecting new MCP servers or otherwise integrating systems housing sensitive data with AI.
In advance of MCP deployments, consider:
- Vetting your MCP servers: Understand where each server runs, who maintains it, and where data goes when a user runs a query (including any third-party intermediaries).
- Inventorying your data: Identify personal information, partner-owned data, sensitive data, and other confidential business information in the systems you plan to connect.
- Reviewing your contracts and privacy policies: Review partner agreements, vendor contracts, DPAs, and privacy policies and document any relevant restrictions on how data may be processed or transferred.
- Documenting the decision path: Determine who approves MCP integrations, what minimum facts are required (data types, retention, access controls, logging), and what conditions must be met before expanding scope.
- Scoping access tightly: Examine whether to use allowlists or denylists to control what data is accessible through MCP—e.g., blocking new data by default until approved on a case-by-case basis. Consider means to restrict bulk extraction.
- Disabling write access: MCP servers configured to allow an AI to take actions (rather than operate “read only”) significantly increase operational risk.
- Implementing a logging protocol: Consider whether to log all queries run, data retrieved, and actions taken through MCP connections, or to take a tiered approach. For any such logs, consider how the repository will be protected and who may access it.
- Training employees: Ensure employees understand organizational rules surrounding MCP use and are aware of the relevant risks.
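Several of the guardrails above (allowlists, disabled write access, and logging) can be enforced in a single policy layer that sits between the AI tool and the connected system. The Python sketch below is one hypothetical way to structure such a layer; the resource names, tool names, and function signatures are all illustrative assumptions, not part of any MCP implementation.

```python
import logging

# Hypothetical guardrail layer wrapped around MCP tool calls:
# an allowlist (block new data sources by default), a read-only
# switch, and an audit log of every attempted call.

ALLOWED_RESOURCES = {"public-wiki", "product-docs"}   # approved case by case
WRITE_TOOLS = {"send_email", "update_record"}         # tools that take actions
READ_ONLY = True                                      # write access disabled

audit_log = logging.getLogger("mcp.audit")

class PolicyViolation(Exception):
    """Raised when a tool call falls outside approved scope."""

def guarded_call(tool_name, resource, arguments, call_fn):
    """Log the attempt, enforce policy, then dispatch the real tool call."""
    audit_log.info("tool=%s resource=%s args=%s", tool_name, resource, arguments)
    if resource not in ALLOWED_RESOURCES:
        raise PolicyViolation(f"resource '{resource}' is not allowlisted")
    if READ_ONLY and tool_name in WRITE_TOOLS:
        raise PolicyViolation(f"write tool '{tool_name}' is disabled (read-only mode)")
    return call_fn(**arguments)
```

In this design, logging happens before the policy checks so that blocked attempts are captured too, giving the governance team visibility into what users (or the AI) tried to reach, not just what succeeded.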
