Why “which API should I call?” is the wrong question in the LLM era



For decades we have adapted to software. We learned shell commands, memorized HTTP method names, and wired up SDKs. Each interface assumed that we would speak its language. In the 1980s, we typed “grep,” “ssh,” and “ls” into a shell; in the mid-2000s we invoked REST endpoints like GET /users; in the 2010s we imported SDKs (client.orders.list()) so we didn’t have to think about HTTP. But each of these steps rested on the same principle: expose capabilities in a structured form so that others could invoke them.

But now we are entering the next interface paradigm. Modern LLMs challenge the idea that a user must choose a function or memorize a method signature. Instead of “Which API should I call?”, the question becomes: “What result am I trying to achieve?” In other words, the interface moves from code to language. In this shift, the Model Context Protocol (MCP) emerges as the abstraction that allows models to interpret human intent, discover capabilities, and execute workflows, effectively exposing software functions not as programmers know them, but as natural-language queries.

MCP is not just a buzzword; several independent analyses identify the architectural change required to make tools consumable by LLMs. A blog post by Akamai engineers describes the transition from traditional APIs to “language-based integrations” for LLMs. An academic article on “AI Agent Workflows and Enterprise APIs” explains how enterprise API architecture must evolve to support goal-oriented agents rather than human-driven calls. In short: we no longer just design APIs for code; we design capabilities for intent.

Why is this important for businesses? Because businesses are drowning in internal systems, proliferating integrations, and user-training costs. Workers struggle not because they have no tools, but because they have too many tools, each with its own interface. When natural language becomes the primary interface, the barrier of “which function should I call?” disappears. A recent business blog observed that natural language interfaces (NLIs) let marketers self-serve data that previously had to wait for analysts to write SQL. When the user simply declares an intent (like “retrieve last quarter’s revenue for a given region”), the system resolves it directly, with no analyst in the loop.

Natural language becomes not a convenience, but the interface

To understand how this evolution works, consider the progression of interfaces:

| Era | Interface | Who it was built for |
| --- | --- | --- |
| CLI | Shell commands | Expert users entering text |
| API | Web or RPC endpoints | Developers integrating systems |
| SDK | Library functions | Programmers using abstractions |
| Natural language (MCP) | Intent-based queries | Humans and AI agents stating what they want |

At each stage, humans had to “learn the language of the machine.” With MCP, the machine absorbs human language and works out the rest. This isn’t just a UX improvement; it’s an architectural change.

Under MCP, the code functions are still there: data access, business logic, and orchestration. But they are discovered rather than invoked manually. For example, rather than calling billingApi.fetchInvoices(customerId=…), you say “View all Acme Corp invoices since January and highlight any late payments.” The model resolves entities, calls the right systems, filters, and returns structured information. The developer’s work moves from wiring endpoints to defining capability surfaces and guardrails.
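
To make “capability surface” concrete, here is a minimal sketch of what such a declaration can look like, modeled on the tool metadata shape in the MCP specification (a name, a description, and a JSON Schema for inputs). The tool name and fields are illustrative assumptions, not part of any real billing system:

```python
# A sketch of an MCP-style tool declaration: the capability is published
# as metadata so a model can discover and select it from intent, rather
# than a developer hard-coding the call. Names and fields are invented.
list_invoices_tool = {
    "name": "list_invoices",
    "description": (
        "Return invoices for a named customer, optionally filtered by "
        "start date and payment status. Use when the user asks about "
        "billing, invoices, or late payments."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "customer": {"type": "string", "description": "Customer name or ID"},
            "since": {"type": "string", "format": "date",
                      "description": "Only include invoices on or after this date"},
            "status": {"type": "string", "enum": ["paid", "due", "late"],
                       "description": "Optional payment-status filter"},
        },
        "required": ["customer"],
    },
}
```

Note that the description does double duty: it documents the tool for humans and serves as the routing signal that lets a model decide this capability matches the user’s intent.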

This change transforms developer experience and business integration. Teams often struggle to integrate new tools because doing so requires mapping schemas, writing glue code, and training users. With natural language, integration involves naming business entities, declaring capabilities, and exposing them through the protocol, as in the sketch below. The human (or AI agent) no longer needs to know parameter names or the order of calls. Studies show that using LLMs as interfaces to APIs can reduce the time and resources needed to build chatbots or tool-driven workflows.
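
As an illustration of “exposing them through the protocol,” here is a hedged sketch assuming the official MCP Python SDK (the “mcp” package); the server name, function, and stub data are invented for the example. In this SDK, the type hints and docstring become the published capability metadata:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("billing")

# In-memory stand-in for a real billing system.
_INVOICES = [
    {"customer": "Acme Corp", "date": "2025-01-15", "status": "late", "amount": 1200},
    {"customer": "Acme Corp", "date": "2025-02-03", "status": "paid", "amount": 800},
]

@mcp.tool()
def list_invoices(customer: str, since: str = "1970-01-01") -> list[dict]:
    """Return invoices for a customer, optionally filtered by start date."""
    # The decorator derives the tool's input schema from the type hints
    # and its description from this docstring: the code is the metadata.
    return [inv for inv in _INVOICES
            if inv["customer"] == customer and inv["date"] >= since]

if __name__ == "__main__":
    mcp.run()  # serve over stdio so any MCP client can discover the tool
```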

The change also brings productivity benefits. Companies that adopt LLM-based interfaces can turn data-access latency (hours or days) into conversation latency (seconds). For example, where an analyst previously had to export CSV files, run transformations, and assemble slides, a language interface can “summarize the top five churn-risk factors over the last quarter” and generate the narrative and visuals in one pass. The human then reviews, adjusts, and acts, moving from the role of data plumber to that of decision maker. This matters: according to a survey by McKinsey & Company, 63% of organizations using generative AI already produce text output, and more than a third generate images or code. While many are still in the early days of capturing enterprise-wide ROI, the signal is clear: language as an interface unlocks new value.

In architectural terms, this means that software design must evolve. MCP requires systems that publish capability metadata, support semantic routing, maintain contextual memory, and apply guardrails. API design no longer asks “What function will the user call?” but rather “What intent can the user express?” A recently published framework for improving enterprise APIs for LLMs shows how APIs can be enriched with natural-language-oriented metadata so that agents can select tools dynamically. The implication: software becomes modular around intent surfaces rather than function surfaces; a sketch of semantic routing follows.
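
As a toy illustration of semantic routing, the sketch below picks the published capability whose description best matches a free-form intent. Token overlap stands in for the embedding similarity a production router would use, and the tool catalog is an invented example:

```python
# Toy semantic router: given a free-form intent, select the capability
# whose published description best matches it. A real system would use
# embedding similarity; plain token overlap keeps the sketch dependency-free.
TOOLS = {
    "list_invoices": "Return invoices for a customer, filter by date and payment status",
    "create_ticket": "Open a customer support ticket with a priority and summary",
    "revenue_report": "Summarize revenue by region and quarter",
}

def route(intent: str) -> str:
    """Return the name of the tool whose description best overlaps the intent."""
    intent_tokens = set(intent.lower().split())
    def score(item: tuple[str, str]) -> int:
        _, description = item
        return len(intent_tokens & set(description.lower().split()))
    return max(TOOLS.items(), key=score)[0]

print(route("show all Acme Corp invoices since January"))  # -> list_invoices
```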

Language-first systems also carry risks and requirements. Natural language is inherently ambiguous, which is why businesses must implement authentication, logging, provenance, and access control, just as they did for APIs. Without these guardrails, an agent could call the wrong system, expose data, or misinterpret user intent. An article on “prompt collapse” highlights the danger: as the natural-language interface becomes dominant, software collapses into “a conversationally accessible capability” and the business into “an API with a natural language interface.” That transformation is powerful, but safe only if systems are designed for introspection, auditing, and governance; one way to enforce this is sketched below.
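
Here is a minimal sketch of such guardrails, with an invented policy table and role names: every tool invocation passes an access-control check and leaves an audit record before the underlying capability runs.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("audit")

# Access-control policy: which roles may invoke which capabilities.
POLICY = {
    "list_invoices": {"finance", "support"},
    "revenue_report": {"finance"},
}

def guarded_call(user: str, role: str, tool: str, args: dict, tools: dict):
    """Run tools[tool](**args) only if the role is allowed, with an audit trail."""
    if role not in POLICY.get(tool, set()):
        audit.warning("DENIED user=%s role=%s tool=%s", user, role, tool)
        raise PermissionError(f"{role} may not call {tool}")
    # Provenance: record who asked for what, with which arguments, and when.
    audit.info("CALL user=%s tool=%s args=%s at=%s",
               user, tool, json.dumps(args),
               datetime.now(timezone.utc).isoformat())
    return tools[tool](**args)
```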

This change also has cultural and organizational ramifications. For decades, companies have hired integration engineers to design APIs and middleware. With MCP-based models, companies will increasingly hire ontology engineers, capability architects, and agent accreditation specialists. These roles focus on defining the semantics of business operations, mapping business entities to system capabilities, and maintaining contextual memory. With the interface now human-centric, skills such as domain knowledge, prompt framing, supervision, and evaluation become central.

What should business leaders do today? First, treat natural language as an interface layer, not a fancy add-on. Map the workflows in your business that can safely be invoked through language. Next, catalog the underlying capabilities you already have: data services, analytics, and APIs. Then ask: “Are they discoverable? Can they be invoked from an intent?” Finally, pilot an MCP-style layer: pick a small domain (customer support triage, say) where a user or agent can state a desired result in plain language and let the systems handle the orchestration. Then iterate and scale.

Natural language isn’t just the new front end. It is becoming the default interface layer for software, succeeding CLIs, then APIs, then SDKs, and MCP is the abstraction that makes this possible. The benefits include faster integration, more modular systems, higher productivity, and new roles. For organizations still calling endpoints by hand, this change will feel like learning a new platform. The question is no longer “which function should I call?” but “what do I want to do?”

Dhyey Mavani works on accelerating generative AI and computational mathematics.



