
Inside Intuit’s GenOS update: Why prompt optimization and intelligent data cognition are critical to enterprise agentic AI success




Enterprise AI teams face an expensive dilemma: build sophisticated agent systems that lock them into specific large language model (LLM) providers, or constantly rewrite prompts and data pipelines when switching between models. Financial technology giant Intuit solved this problem with a breakthrough that could reshape how organizations approach multi-model architectures.

Like many companies, Intuit has built generative AI solutions powered by multiple large language models (LLMs). In recent years, the company's generative AI operating system (GenOS) platform has steadily advanced, providing sophisticated capabilities to the company's developers and end users, such as Intuit Assist. The company has become increasingly focused on agentic workflows that have had a measurable impact on users of Intuit products, which include QuickBooks, Credit Karma and TurboTax.

Intuit is now extending GenOS with a series of updates aimed at improving developer productivity and overall AI efficiency. Improvements include an agent starter kit that has enabled 900 internal developers to build hundreds of AI agents within five weeks. The company is also launching what it calls an "intelligent data cognition layer" that goes beyond traditional retrieval-augmented generation approaches.

Perhaps even more impactful, Intuit has solved one of the thorniest problems in enterprise AI: how to build agent systems that work transparently across multiple large language models without forcing developers to rewrite prompts for each model.

"The key problem is that when you write a prompt for a model, model A, you tend to think about how model A is optimized and how it was built, and then you have to go to model B," Ashok Srivastava, Intuit's chief data officer, told VentureBeat. "The question is: do you have to rewrite it? And in the past, you did have to rewrite it."

How genetic algorithms eliminate vendor lock-in and reduce AI operational costs

Organizations have found several ways to use different LLMs in production. One approach is to use some form of LLM model routing technology, which uses a smaller LLM to determine where to send a request.
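The routing idea can be sketched in a few lines. This is a hedged illustration, not Intuit's implementation: the backend model names and the keyword-based classifier below are invented stand-ins for the small routing LLM the article describes.

```python
# Illustrative model-routing sketch: a lightweight classifier decides
# which backend model should handle each request. In production the
# classifier would itself be a small LLM; here a keyword heuristic
# stands in for it. All names are hypothetical.

ROUTES = {
    "code": "large-code-model",
    "math": "large-reasoning-model",
    "chat": "small-general-model",
}

def classify_request(prompt: str) -> str:
    """Stand-in for the small routing LLM: tag the request type."""
    lowered = prompt.lower()
    if any(kw in lowered for kw in ("def ", "function", "compile")):
        return "code"
    if any(kw in lowered for kw in ("integral", "solve", "prove")):
        return "math"
    return "chat"

def route(prompt: str) -> str:
    """Return the name of the backend model a request is sent to."""
    return ROUTES[classify_request(prompt)]
```

The design point is that the router is much cheaper to run than the large models it dispatches to, so the classification cost is amortized across every request.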

Intuit's prompt optimization service takes a different approach. It is not necessarily about finding the best model for a request, but rather about optimizing a prompt for a number of different LLMs. The system uses genetic algorithms to automatically create and test prompt variants.

"The way the prompt translation service works is that it has genetic algorithms as a component, and those genetic algorithms actually create variants of the prompt and then run an internal optimization," said Srivastava. "They start with a base set, they create a variant, they test the variant; if that variant is really effective, then it says, I'll make this the new base, and then it continues to optimize."
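The variant-and-promote loop Srivastava describes can be sketched as a simple evolutionary search. This is a minimal illustration under stated assumptions: the mutation operators and the scoring function below are invented, and in practice the evaluator would run the target LLM against a test set rather than the toy heuristic used here.

```python
import random

# Sketch of the loop Srivastava describes: mutate a base prompt, score
# the variant, and promote it to the new base when it beats the current
# best. Mutations and scoring are illustrative assumptions, not GenOS
# internals.

MUTATIONS = [
    lambda p: p + " Answer concisely.",
    lambda p: p + " Think step by step.",
    lambda p: "You are a domain expert. " + p,
]

def score(prompt: str) -> float:
    """Stand-in evaluator. A real system would run the target LLM on a
    held-out test set and measure answer quality per model."""
    return min(len(prompt) / 100.0, 1.0)  # toy proxy for quality

def optimize(base: str, generations: int = 10, seed: int = 0) -> str:
    rng = random.Random(seed)
    best, best_score = base, score(base)
    for _ in range(generations):
        variant = rng.choice(MUTATIONS)(best)
        s = score(variant)
        if s > best_score:  # promote the variant to the new base
            best, best_score = variant, s
    return best
```

Because the evaluator is scored per target model, the same base prompt can be optimized independently for model A and model B, which is what removes the manual rewrite step.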

This approach offers immediate operational advantages beyond convenience. The system provides automatic failover capabilities for companies concerned about vendor lock-in or service reliability.

"If you're using a certain model, and for whatever reason that model goes down, we can translate the prompt so that we can use a new model that is actually operational," Srivastava noted.
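Automatic failover of this kind can be sketched as a preference-ordered fallback chain. The provider interface and exception type below are hypothetical; the point is only the control flow of trying the next backend when one is unavailable.

```python
# Hedged failover sketch: try each (name, callable) backend in
# preference order and fall back when one is unavailable. The
# ModelUnavailable exception and the callable interface are assumed
# for illustration; real SDKs raise their own error types.

class ModelUnavailable(Exception):
    pass

def call_with_failover(prompt: str, backends: list) -> str:
    """Return the first successful reply from an ordered backend list."""
    errors = []
    for name, call in backends:
        try:
            return call(prompt)
        except ModelUnavailable as exc:
            errors.append((name, str(exc)))  # record and try next model
    raise RuntimeError(f"all backends failed: {errors}")
```

In Intuit's described setup, the prompt handed to each fallback backend would first pass through the optimization service, so the replacement model receives a prompt tuned for it rather than for the model that went down.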

Beyond RAG: intelligent data cognition for enterprise data

While prompt optimization solves the model portability challenge, Intuit's engineers identified another critical bottleneck: the time and expertise required to integrate AI with complex enterprise data architectures.

Intuit has developed what it calls an "intelligent data cognition layer" that takes on more sophisticated data integration challenges. The approach goes well beyond simple retrieval-augmented generation (RAG) over documents.

For example, if an organization receives a dataset from a third party with a specific schema the organization is not familiar with, the cognition layer can help. Srivastava noted that the cognition layer understands both the source schema and the target schema, and how to map between them.

This capability addresses real-world enterprise scenarios where data comes from multiple sources with different structures. The system can automatically determine context that simple schema matching would miss.
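To make the contrast concrete, here is what the naive baseline, simple name-based schema matching, looks like. The field names and the normalization rule are invented for illustration; the article's point is that Intuit's cognition layer infers semantic context that a matcher like this cannot.

```python
# Baseline schema mapping by normalized field names. This is the
# "simple schema matching" the article says falls short: it works only
# when source and target names are near-identical, and has no notion
# of field semantics. All field names here are hypothetical.

def normalize(name: str) -> str:
    return name.lower().replace("_", "").replace(" ", "")

def map_record(record: dict, target_fields: list) -> dict:
    """Map source keys onto target fields by normalized-name match;
    unmatched target fields come back as None."""
    by_norm = {normalize(k): v for k, v in record.items()}
    return {f: by_norm.get(normalize(f)) for f in target_fields}
```

A field named, say, `amt` in the source would come back as `None` here even if it semantically matches a target `invoice_total`, which is the gap a semantics-aware cognition layer is meant to close.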

Beyond generative AI: how Intuit's "super model" helps improve forecasts and recommendations

The intelligent data cognition layer enables sophisticated data integration, but Intuit's competitive advantage extends beyond generative AI to the way these capabilities combine with proven predictive analytics.

The company operates what it calls a "super model": an ensemble system that combines multiple prediction models and deep learning approaches for forecasting, along with sophisticated recommendation engines.

Srivastava explained that the super model is a supervisory model that looks across all the underlying recommendation systems. It examines how well those recommendations have performed in experiments and in the field and, based on all that data, takes an ensemble approach to making the final recommendation. This hybrid approach enables predictive capabilities that purely LLM-based systems cannot match.
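The supervisory idea can be sketched as performance-weighted voting over the underlying recommenders. This is a minimal illustration, not Intuit's architecture: the recommender names, hit rates, and the weighted-support rule are assumptions standing in for whatever scoring the real super model uses.

```python
# Sketch of a supervisory ensemble: weight each underlying recommender
# by its measured historical hit rate and return the candidate with
# the highest weighted support. Models, rates, and the aggregation
# rule are invented for illustration.

def ensemble_recommend(candidates_by_model: dict, hit_rates: dict) -> str:
    """candidates_by_model: model name -> recommended item.
    hit_rates: model name -> historical success rate in [0, 1]."""
    support: dict = {}
    for model, item in candidates_by_model.items():
        support[item] = support.get(item, 0.0) + hit_rates.get(model, 0.0)
    return max(support, key=support.get)
```

Note how a single strong recommender can be outvoted by two weaker ones that agree, which is the basic mechanism by which an ensemble exploits field-measured performance data.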

Combining agentic AI with predictions will help organizations look into the future and see what might happen, for example, with a cash-flow problem. The agent could then suggest changes that can be made now, with user authorization, to help prevent future problems.

Implications for enterprise AI strategy

Intuit's approach offers several strategic lessons for companies seeking to lead in AI adoption.

First, investing in LLM-agnostic architectures from the start can provide significant operational flexibility and risk mitigation. The genetic algorithm approach to prompt optimization could be particularly valuable for companies that operate across multiple cloud providers or are concerned about model availability.

Second, the emphasis on combining traditional AI capabilities with generative AI suggests that companies should not abandon existing prediction and recommendation systems when building agentic architectures. Instead, they should look for ways to integrate those capabilities into more sophisticated reasoning systems.

This news means that the bar for sophisticated agent implementations is rising for companies adopting AI later in the cycle. Organizations must think beyond simple chatbots or document retrieval systems to stay competitive, focusing on multi-agent architectures that can manage complex business workflows and predictive analytics.

The key takeaway for technical decision-makers is that successful enterprise AI implementations require sophisticated infrastructure investments, not just API calls to foundation models. Intuit's GenOS demonstrates that competitive advantage comes from how well organizations integrate AI capabilities into their existing data and business processes.
