Over the past two years, AI systems have become able not only to generate text, but to take actions, make decisions and integrate into business systems, and that new capability has brought added complexity. Each AI model has its own proprietary way of interfacing with other software, so every added system creates another integration challenge, and engineering teams end up spending more time wiring systems together than using them. This integration tax is not unique to AI, but it is the hidden cost of today's fragmented AI landscape.
Anthropic's Model Context Protocol (MCP) is one of the first attempts to fill this gap. It offers a clean, stateless protocol for how large language models (LLMs) can discover and invoke external tools with consistent interfaces and minimal developer friction. This has the potential to turn isolated AI capabilities into composable, enterprise-ready workflows and, in turn, to make integrations standardized and simpler. Is this the panacea we need? Before we get ahead of ourselves, let us first understand what MCP is.
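To make the idea concrete, here is a simplified sketch of the JSON-RPC 2.0 message shapes MCP uses for tool discovery and invocation. The `get_quote` tool is a hypothetical example, and the payloads are trimmed for illustration; the actual MCP specification defines additional fields and a session handshake.

```python
import json

# A client first asks a server which tools it exposes...
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# ...and the server answers with self-describing tool metadata,
# including a JSON Schema for the arguments each tool expects.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [{
            "name": "get_quote",  # hypothetical example tool
            "description": "Fetch a price quote for a product SKU",
            "inputSchema": {
                "type": "object",
                "properties": {"sku": {"type": "string"}},
                "required": ["sku"],
            },
        }]
    },
}

# Any discovered tool can then be invoked with one uniform call shape,
# regardless of which vendor or service implements it.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_quote", "arguments": {"sku": "ABC-123"}},
}

print(json.dumps(call_request["params"]))
```

The key design point is that the model never sees vendor-specific plumbing: discovery and invocation look the same for every tool.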
Today, tool integration in LLM-powered systems is ad hoc at best. Each agent framework, plugin system and model vendor tends to define its own way of handling tool invocation. This leads to reduced portability.
MCP offers a refreshing alternative.
If it is widely adopted, MCP could make AI tools discoverable, modular and interoperable, much as REST (representational state transfer) and OpenAPI did for web services.
While MCP is an open-source protocol developed by Anthropic that has recently gained traction, it is important to recognize what it is, and what it is not. MCP is not yet a formal industry standard. Despite its open nature and growing adoption, it is still maintained and guided by a single vendor, and designed primarily around the Claude model family.
A true standard requires more than open access. There should be an independent governance group, representation from multiple stakeholders and a formal consortium to oversee its evolution, versioning and dispute resolution. None of these elements are in place for MCP today.
This distinction is more than technical. In recent enterprise implementation projects involving task orchestration, document processing and quote automation, the absence of a shared tool-interface layer surfaced repeatedly as a friction point. Teams are forced to build adapters or duplicate logic across systems, which leads to higher complexity and increased cost. Without a neutral, widely accepted protocol, that complexity is unlikely to decrease.
This is particularly relevant in today's fragmented AI landscape, where several vendors are exploring their own proprietary or parallel protocols. Google, for example, has announced its Agent2Agent (A2A) protocol, while IBM is developing its own Agent Communication Protocol. Without coordinated effort, there is a real risk of the ecosystem fragmenting rather than converging, making interoperability and long-term stability harder to achieve.
Meanwhile, MCP itself is still evolving, with its specification, security practices and implementation guidelines being actively refined. Early adopters have noted challenges around developer experience, tool integration and robust security, none of which is trivial for enterprise-grade systems.
In this context, enterprises must be cautious. While MCP points in a promising direction, mission-critical systems require predictability, stability and interoperability, which are better delivered by mature, community-governed standards. Protocols governed by a neutral body protect long-term investments, safeguarding adopters from unilateral changes or strategic pivots by any single vendor.
For organizations evaluating MCP today, this raises a crucial question: How do you adopt innovation without locking in uncertainty? The next step is not to reject MCP, but to engage with it strategically: experimenting where it adds value, isolating dependencies and preparing for a multi-protocol future that may still be in flux.
Although experimenting with MCP makes sense, especially for teams already using Claude, large-scale adoption requires a more strategic lens. Here are some considerations:
If your tools are MCP-specific and only Anthropic backs MCP, you are tied to its stack. That limits flexibility as multi-model strategies become more common.
Letting LLMs invoke tools autonomously is both powerful and dangerous. Without guardrails such as scoped permissions, output validation and fine-grained authorization, a poorly scoped tool could expose systems to manipulation or error.
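One way to add such guardrails is a thin authorization layer between the model and the tools. The sketch below is hypothetical (the tool names, scopes and registry shape are illustrative, not from any SDK): every call must match an allow-list of scoped permissions, and unexpected argument names are rejected before the tool runs.

```python
# Hypothetical guardrail layer for autonomous tool invocation.
ALLOWED_TOOLS = {"search_docs": {"read"}}  # tool name -> required scopes

def guarded_call(tool_name, arguments, granted_scopes, registry):
    required = ALLOWED_TOOLS.get(tool_name)
    if required is None:
        raise PermissionError(f"tool {tool_name!r} is not allow-listed")
    if not required.issubset(granted_scopes):
        raise PermissionError(f"missing scopes for {tool_name!r}")
    # Basic input validation: reject unexpected argument names outright.
    tool = registry[tool_name]
    unexpected = set(arguments) - set(tool["params"])
    if unexpected:
        raise ValueError(f"unexpected arguments: {sorted(unexpected)}")
    return tool["fn"](**arguments)

# Illustrative registry with a single read-only tool.
registry = {
    "search_docs": {
        "params": {"query"},
        "fn": lambda query: f"results for {query}",
    }
}

print(guarded_call("search_docs", {"query": "MCP"}, {"read"}, registry))
```

In a real deployment the same checks would sit in the MCP server or a gateway in front of it, so that no tool is reachable without an explicit grant.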
The "reasoning" behind a tool call is implicit in the model's output, which makes debugging harder. Logging, monitoring and transparency tooling will be essential for enterprise use.
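A minimal sketch of what such observability could look like, assuming a simple decorator-style wrapper (the field names and wrapper are illustrative): every tool invocation emits a structured log line with its name, arguments and latency, leaving an auditable trace even when the model's reasoning is opaque.

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("tool-audit")

def audited(tool_fn, tool_name):
    """Wrap a tool so every call is logged as structured JSON."""
    def wrapper(**kwargs):
        start = time.perf_counter()
        result = tool_fn(**kwargs)
        log.info(json.dumps({
            "tool": tool_name,
            "arguments": kwargs,
            "latency_ms": round((time.perf_counter() - start) * 1000, 2),
        }))
        return result
    return wrapper

# Hypothetical pricing tool wrapped with the audit layer.
lookup = audited(lambda sku: {"sku": sku, "price": 9.99}, "get_quote")
print(lookup(sku="ABC-123"))
```

Shipping these records to an existing log pipeline gives operators a replayable history of what the agent actually did, independent of the model's own narrative.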
Most tools today are not MCP-aware. Organizations may need to retrofit their APIs to comply, or build middleware adapters to bridge the gap.
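The adapter pattern can be lightweight. Below is one possible shape, with a hypothetical endpoint URL and an injected `http_get` callable standing in for a real HTTP client (such as `requests.get`): an existing REST GET endpoint is wrapped so it presents a self-describing tool definition without the API itself changing.

```python
def make_rest_adapter(name, description, url, params, http_get):
    """Wrap an existing GET endpoint as a discoverable tool definition."""
    return {
        "name": name,
        "description": description,
        "inputSchema": {
            "type": "object",
            "properties": {p: {"type": "string"} for p in params},
            "required": list(params),
        },
        # The callable an agent runtime would invoke:
        "call": lambda arguments: http_get(url, arguments),
    }

# Stub client so the sketch stays self-contained and testable.
fake_http_get = lambda url, args: {"url": url, "args": args, "status": 200}

tool = make_rest_adapter(
    "inventory_lookup", "Check stock for a SKU",
    "https://internal.example/api/inventory",  # hypothetical endpoint
    ["sku"], fake_http_get,
)
print(tool["call"]({"sku": "ABC-123"})["status"])
```

A fleet of such adapters lets legacy APIs appear behind one uniform tool interface while the underlying services stay untouched.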
If you are building agent-based products, MCP is worth tracking, but adoption should be staged: start with contained experiments where it adds value, isolate MCP-specific dependencies behind internal interfaces, and avoid deep coupling until its governance matures.
These steps preserve flexibility while encouraging architectural practices aligned with future convergence.
Based on experience in enterprise environments, one pattern is clear: the lack of standardized model-tool interfaces slows adoption, raises integration costs and creates operational risk.
The idea behind MCP, that models should speak a consistent language to tools, is not just good; it is necessary. It is a foundational layer for how future AI systems will coordinate, execute and reason across real-world workflows. Even so, the road to widespread adoption is neither guaranteed nor risk-free.
Whether MCP becomes the standard remains to be seen. But the conversation it has sparked is one the industry can no longer avoid.
Gopal Kuppuswamy is co-founder of Cognide.