
IBM sees enterprise customers using ‘everything’ when it comes to AI; the challenge is matching the LLM to the right use case

Over the past 100 years, IBM has seen many technological trends rise and fall. What tends to win are technologies that offer choice.

At VB Transform 2025 today, Armand Ruiz, vice president of AI platform at IBM, detailed how Big Blue thinks about generative AI and how its enterprise users actually deploy the technology. A key theme Ruiz emphasized is that, at this stage, it’s not about choosing a single vendor or large language model (LLM). Increasingly, enterprise customers are systematically rejecting single-vendor AI strategies in favor of multi-model approaches that match specific LLMs to targeted use cases.

IBM has its own open-source AI models in the Granite family, but it is not positioning that technology as the only choice, or even the right choice, for all workloads. This enterprise behavior is pushing IBM to position itself not as a foundation model competitor, but as what Ruiz called a control tower for AI workloads.

“When I sit in front of a client, they’re using everything they have access to, everything,” Ruiz said. “For coding, they like Anthropic, and for certain other use cases, like reasoning, they like o3. Then for LLM customization, with their own data and fine-tuning, they like our Granite series, or Mistral with their small models, or even Llama… It’s just matching the LLM to the right use case. And then we also help them make recommendations.”

Multi-LLM gateway strategy

IBM’s response to this market reality is a newly released model gateway that gives enterprises a single API to switch between different LLMs while maintaining observability and governance across all deployments.

The technical architecture lets customers run open-source models on their own inference stack for sensitive use cases while simultaneously accessing public APIs such as AWS Bedrock or Google Cloud’s Gemini for less critical applications.

“This gateway provides our customers with a single layer, with a single API, to switch from one LLM to another LLM, and adds observability and governance throughout,” Ruiz said.
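IBM has not published the gateway’s API here, so the snippet below is only a minimal Python sketch of the pattern the quote describes: one entry point, a provider table, and a uniform audit log so every call is governed the same way regardless of vendor. All names (`ModelGateway`, `call_granite`, `call_claude`) are invented for illustration; a real gateway would wrap vendor SDKs or HTTP APIs.

```python
from dataclasses import dataclass, field
from typing import Callable
import time

# Stub provider calls; a real gateway would call vendor APIs here.
def call_granite(prompt: str) -> str:
    return f"[granite] {prompt}"

def call_claude(prompt: str) -> str:
    return f"[claude] {prompt}"

@dataclass
class ModelGateway:
    """Single entry point that routes a request to a named model
    and records an audit entry for observability/governance."""
    providers: dict[str, Callable[[str], str]]
    audit_log: list[dict] = field(default_factory=list)

    def complete(self, model: str, prompt: str) -> str:
        if model not in self.providers:
            raise ValueError(f"unknown model: {model}")
        start = time.time()
        response = self.providers[model](prompt)
        # Governance hook: every call is logged identically,
        # regardless of which vendor served it.
        self.audit_log.append({
            "model": model,
            "prompt_chars": len(prompt),
            "latency_s": time.time() - start,
        })
        return response

gateway = ModelGateway(providers={
    "granite-3": call_granite,  # self-hosted, sensitive workloads
    "claude": call_claude,      # public API, less critical tasks
})
print(gateway.complete("granite-3", "Summarize this contract."))
```

Switching a workload between a self-hosted model and a public API then becomes a one-line change to the `model` argument, while the audit log stays uniform.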

The approach directly contradicts the common vendor strategy of locking customers into proprietary ecosystems. IBM is not alone in taking a multi-vendor approach to model selection. Multiple tools have emerged in recent months for model routing, which aims to direct workloads to the appropriate model.

Emerging agent orchestration protocols as critical infrastructure

Beyond multi-model management, IBM is tackling the emerging challenge of agent-to-agent communication via open protocols.

The company developed ACP (Agent Communication Protocol) and contributed it to the Linux Foundation. ACP is a competing effort to Google’s Agent2Agent (A2A) protocol, which Google contributed to the Linux Foundation this week.

Ruiz noted that both protocols aim to facilitate communication between agents and reduce custom development work. He expects the different approaches will ultimately converge; for now, the differences between A2A and ACP are mostly technical.

Agent orchestration protocols provide standardized ways for AI systems to interact across different platforms and vendors.

The technical significance becomes clear at enterprise scale: some IBM customers already have more than 100 agents in pilot programs. Without standardized communication protocols, each agent-to-agent interaction requires custom development, creating an unsustainable integration burden.
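Neither the ACP nor the A2A wire format is reproduced here; the sketch below just illustrates why a shared message envelope matters. With one agreed shape, each of N agents implements a single handler instead of a custom integration per peer. The `AgentMessage` fields are hypothetical, not the actual ACP or A2A schema.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical shared envelope; the real ACP/A2A schemas differ.
@dataclass
class AgentMessage:
    sender: str
    recipient: str
    intent: str    # what the sender wants done
    payload: dict  # task-specific data

def serialize(msg: AgentMessage) -> str:
    return json.dumps(asdict(msg))

def deserialize(raw: str) -> AgentMessage:
    return AgentMessage(**json.loads(raw))

# Any agent speaking the envelope can talk to any other: N agents
# need N handlers, not N*(N-1)/2 pairwise custom integrations.
class EchoAgent:
    def __init__(self, name: str):
        self.name = name

    def handle(self, raw: str) -> str:
        msg = deserialize(raw)
        reply = AgentMessage(
            sender=self.name,
            recipient=msg.sender,
            intent="result",
            payload={"echo": msg.payload},
        )
        return serialize(reply)

hr = EchoAgent("hr-agent")
raw = serialize(AgentMessage("payroll-agent", "hr-agent", "lookup", {"id": 7}))
print(hr.handle(raw))
```

The point is the arithmetic in the comment: at 100 agents, pairwise custom integration means thousands of bespoke connections, while a shared protocol keeps it at one handler per agent.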

AI is about transforming workflows and the way work gets done

As for how Ruiz sees AI impacting businesses today, he suggests it has to be about more than chatbots.

“If you’re just doing chatbots, or just trying to save costs with AI, you’re not doing AI,” Ruiz said. “I think AI is really about completely transforming the workflow and the way work gets done.”

The distinction between AI implementation and AI transformation centers on how deeply the technology is integrated into existing business processes. IBM’s internal HR example illustrates the shift: instead of employees asking chatbots for HR information, specialized agents now handle routine requests about compensation, hiring and promotions, automatically routing to the appropriate systems and involving humans only when necessary.

“I used to spend a lot of time talking to my HR partners about many things. Most of those are now handled with an HR agent,” Ruiz said. “Depending on the question, whether it’s compensation, or simple separation management, hiring someone, or a promotion, all these things will connect with different internal HR systems, and those will be like separate agents.”

This represents a fundamental architectural shift from human-computer interaction models to AI-mediated workflow automation. Rather than employees learning to interact with AI tools, AI learns to execute complete business processes end to end.
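The HR example above can be sketched as a small dispatcher: a routing step classifies the request, hands it to a specialized agent wrapping one internal system, and escalates to a human only when no agent matches. The keyword matching and agent names below are invented for illustration; a production system would use an LLM classifier and real backend integrations, not substring checks.

```python
from typing import Callable

# Illustrative specialized agents, each fronting one internal HR system.
def compensation_agent(request: str) -> str:
    return "compensation: routed to payroll system"

def hiring_agent(request: str) -> str:
    return "hiring: routed to applicant-tracking system"

def promotion_agent(request: str) -> str:
    return "promotion: routed to performance system"

ROUTES: dict[str, Callable[[str], str]] = {
    "salary": compensation_agent,
    "hire": hiring_agent,
    "promote": promotion_agent,
}

def route_hr_request(request: str) -> str:
    """Dispatch to a specialized agent; escalate to a human otherwise."""
    lowered = request.lower()
    for keyword, agent in ROUTES.items():
        if keyword in lowered:
            return agent(request)
    # No agent matched: involve a human, per the "only when necessary" rule.
    return "escalated: human HR partner"

print(route_hr_request("What is my salary band?"))
print(route_hr_request("File a grievance"))
```

The design choice worth noting is the fallback: the employee never chooses which system to ask, and the human stays in the loop only for requests no agent can handle.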

The technical implication: enterprises need to move beyond API integrations and prompt engineering to deep process instrumentation that lets AI agents execute multi-step workflows.

Strategic implications for business AI investment

IBM’s real-world deployment data suggests several critical shifts for enterprise AI strategy:

Abandon chatbot-first thinking: Organizations should identify complete workflows for transformation rather than adding conversational interfaces to existing systems. The goal is to eliminate human steps, not to improve human-computer interaction.

Architect for multi-model flexibility: Rather than committing to single AI providers, enterprises need integration platforms that allow switching between models by use case while maintaining governance standards.

Invest in communication standards: Organizations should prioritize AI tools that support emerging protocols such as MCP, ACP and A2A rather than proprietary integration approaches that create vendor lock-in.

“There is so much to build, and I keep saying that everyone needs to learn AI, and especially business leaders need to be AI-first leaders and understand the concepts,” Ruiz said.
