Physical Address
304 North Cardinal St.
Dorchester Center, MA 02124
This article is part of VentureBeat's special issue, "The real cost of AI: performance, efficiency and ROI at scale." Read more from this special issue.
Over the past two decades, companies have had the choice between open-source and closed technologies.
The original choice for enterprises mainly concerned operating systems, with Linux offering an open-source alternative to Microsoft Windows. In the developer realm, open-source languages like Python and JavaScript dominate, while open-source technologies such as Kubernetes are standard in the cloud.
The same type of choice between open and closed now confronts enterprises adopting AI, with multiple options for both types of models. On the proprietary closed-model front are some of the largest and most widely used models on the planet, including those from OpenAI and Anthropic. On the open-source side are models like Meta's Llama, IBM Granite, Alibaba's Qwen and DeepSeek.
Understanding when to use an open or a closed model is a critical choice for enterprise AI decision-makers in 2025 and beyond. The choice carries both financial and customization implications that companies must understand and weigh.
There is no shortage of hyperbole around the decades-old rivalry between open and closed licensing. But what does it all actually mean for enterprise users?
A closed-source proprietary technology, such as OpenAI's GPT-4o, does not make its model code, training data or model weights open or available for anyone to see. The model is not readily available to be fine-tuned and, generally, it is only available for real enterprise use at a cost (yes, ChatGPT has a free tier, but that's not going to cut it for a genuine corporate workload).
Open technology, such as Meta's Llama, IBM Granite or DeepSeek, has openly available code. Enterprises can generally use the models freely, without restrictions, including fine-tuning and customization.
Rohan Gupta, a director with Dowelly, told VentureBeat that the open versus closed source debate is neither unique nor native to AI, and that it will probably never be resolved.
Gupta explained that closed-source vendors typically offer several layers of packaging around their models that allow for ease of use, simplified scaling, more seamless upgrades and downgrades, and a steady stream of improvements. They also provide significant developer support, including documentation as well as hands-on guidance, and often offer tighter integrations with infrastructure and applications. In exchange, an enterprise pays a premium for these services.
“Open source models, on the other hand, can provide greater control, flexibility and customization options, and are supported by a vibrant, enthusiastic developer ecosystem,” said Gupta. “These models are increasingly accessible via fully managed APIs across cloud providers, broadening their distribution.”
The question many enterprise users might ask is which is better: an open or a closed model? The answer, however, is not necessarily one or the other.
“We don't view this as a binary choice,” David Guarrera, generative AI leader at EY Americas, told VentureBeat. “Open vs. closed is increasingly a fluid design space, where models are selected, or even automatically orchestrated, based on trade-offs between accuracy, latency, cost, interpretability and security at different points in a workflow.”
Guarrera noted that closed models limit how deeply organizations can optimize or adapt behavior. Proprietary model vendors often restrict fine-tuning, charge premium rates or hide the process in black boxes. While API-based tools simplify integration, they abstract away much of the control, making it harder to build highly specific or interpretable systems.
By contrast, open-source models allow for targeted fine-tuning, guardrail design and optimization for specific use cases. This matters more in an agentic future, where models are no longer monolithic general-purpose tools but interchangeable components in dynamic workflows. The ability to finely shape model behavior, at low cost and with full transparency, becomes a major competitive advantage when deploying task-specific agents or tightly regulated solutions.
“In practice, we foresee an agentic future where model selection is abstracted away,” said Guarrera.
For example, a user may draft an email with one AI tool, summarize legal documents with another, search enterprise documents with a fine-tuned open-source model and interact with AI locally via an on-device LLM, all without ever knowing which model is doing what.
“The real question becomes: what mixture of models best suits your workflow's specific demands?” said Guarrera.
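The mixture-of-models idea Guarrera describes can be sketched as a simple rule-based dispatcher. The model names, per-token prices and policy rules below are illustrative assumptions for the sketch, not real vendor terms:

```python
# Minimal sketch of rule-based model routing across open and closed options.
# All catalog entries, prices and thresholds are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class ModelOption:
    name: str
    open_source: bool
    cost_per_1k_tokens: float  # USD, hypothetical
    on_prem_capable: bool      # can run on infrastructure the company controls

CATALOG = [
    ModelOption("closed-frontier-api", False, 0.010, False),
    ModelOption("open-weights-hosted", True, 0.002, True),
    ModelOption("open-weights-local", True, 0.000, True),
]

def route(task_sensitivity: str, needs_frontier_quality: bool) -> ModelOption:
    """Pick a model: regulated data must stay on controllable infrastructure;
    otherwise prefer the cheapest adequate option."""
    candidates = CATALOG
    if task_sensitivity == "regulated":
        candidates = [m for m in candidates if m.on_prem_capable]
    if needs_frontier_quality:
        # Assumption in this sketch: only the closed API offers frontier quality.
        frontier = [m for m in candidates if not m.open_source]
        if frontier:
            candidates = frontier
    # Among the remaining candidates, take the cheapest per token.
    return min(candidates, key=lambda m: m.cost_per_1k_tokens)

# A regulated task routes to an on-prem-capable open model; a public task
# that needs frontier quality routes to the closed API.
print(route("regulated", False).name)
print(route("public", True).name)
```

In a real deployment the routing rules would also weigh latency and accuracy, as Guarrera notes, and the dispatch would typically live behind an orchestration layer rather than in application code.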
With open models, the basic idea is that the model is freely available for use. Closed models, on the other hand, always come at a price.

The reality when it comes to total cost of ownership (TCO) is more nuanced.
Praveen Akkiraju, managing director at Insight Partners, explained to VentureBeat that TCO has many different layers. Key considerations include infrastructure hosting costs and engineering: are the open-source models self-hosted by the enterprise or by a cloud provider? How much engineering, including fine-tuning, guardrails and security testing, is needed to operationalize the model safely?
Akkiraju noted that fine-tuning an open-weights model can also sometimes be a very complex task. Closed frontier model companies invest enormous engineering effort to guarantee performance across many tasks. In his view, unless enterprises deploy similar engineering expertise, they will face a complex balancing act when fine-tuning open-source models. This creates cost implications when organizations choose their model deployment strategy. For example, businesses can fine-tune multiple model versions for different tasks or use one API for multiple tasks.
Ryan Gross, head of data and applications at cloud-native services provider Caylent, told VentureBeat that, from his perspective, licensing terms don't matter much, except in edge-case scenarios. The most important restrictions often concern model availability when data residency requirements are in place. In that case, deploying an open model on infrastructure like Amazon SageMaker may be the only way to get a state-of-the-art model that still complies. As for TCO, Gross noted that the trade-off is between per-token usage costs and hosting and maintenance costs.
“There is a clear break-even point where the economics shift from closed to cheaper open models,” said Gross.
In his view, for most organizations closed models, with hosting and scaling solved on the organization's behalf, will have a lower TCO. However, for large enterprises, SaaS companies with very high demand on their LLMs but simpler use cases that don't require frontier performance, or AI-centric product companies, hosting distilled open models can be more cost-effective.
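Gross's break-even point can be illustrated with back-of-envelope arithmetic. Every number below is a placeholder assumption; substitute your own API quotes and hosting bills:

```python
# Back-of-envelope TCO break-even between a pay-per-token closed API and
# self-hosting an open model. All figures are hypothetical placeholders.

API_COST_PER_1M_TOKENS = 10.0      # USD, hypothetical closed-model API rate
HOSTING_FIXED_PER_MONTH = 6000.0   # GPU instances + ops overhead, hypothetical
HOSTING_COST_PER_1M_TOKENS = 1.0   # marginal serving cost, hypothetical

def monthly_cost_api(million_tokens: float) -> float:
    # Pure usage-based pricing: cost scales linearly with volume.
    return API_COST_PER_1M_TOKENS * million_tokens

def monthly_cost_self_hosted(million_tokens: float) -> float:
    # Fixed hosting floor plus a small marginal serving cost.
    return HOSTING_FIXED_PER_MONTH + HOSTING_COST_PER_1M_TOKENS * million_tokens

def break_even_million_tokens() -> float:
    # Solve api * x = fixed + hosted * x  =>  x = fixed / (api - hosted)
    return HOSTING_FIXED_PER_MONTH / (API_COST_PER_1M_TOKENS - HOSTING_COST_PER_1M_TOKENS)

volume = break_even_million_tokens()
print(f"Self-hosting pays off above ~{volume:.0f}M tokens/month")
```

With these placeholder rates, self-hosting only wins above roughly 667M tokens per month, which matches Gross's point that low-volume organizations are usually better off with managed closed models.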
Josh Bosquez, CTO at Second Front Systems, is among the many leaders whose companies have had to consider and evaluate open vs. closed models.
“We use both open and closed AI models, depending on the specific use case, security requirements and strategic objectives,” Bosquez told VentureBeat.
Bosquez explained that open models let his company integrate advanced capabilities without the time or cost of training models from scratch. For internal experimentation or rapid prototyping, open models help the company iterate quickly and benefit from community-driven advances.
“Closed models, on the other hand, are our choice when data sovereignty, enterprise support and security guarantees are essential, particularly for customer-facing applications or deployments involving sensitive or regulated environments,” he said. “These models often come from trusted vendors who offer strong performance, compliance support and self-hosting options.”
Bosquez said the model selection process is cross-functional and risk-informed, evaluating not only technical fit but also data handling policies, integration requirements and long-term scalability.
Looking at TCO, he said it varies significantly between open and closed models and that neither approach is universally cheaper.
“It depends on the deployment scope and organizational maturity,” said Bosquez. “Ultimately, we evaluate TCO not just in dollars spent, but in delivery speed, compliance risk and the ability to scale securely.”
For savvy technology decision-makers evaluating AI investments in 2025, the open vs. closed debate isn't about picking sides. It's about building a strategic portfolio approach that optimizes for different use cases within your organization.
The immediate action items are straightforward. First, audit your current AI workloads and map them against the decision framework outlined by the experts, considering accuracy requirements, latency needs, cost constraints, security demands and compliance obligations for each use case. Second, honestly assess your organization's engineering capacity for model fine-tuning, hosting and maintenance, as this directly affects your true total cost of ownership.
Third, begin experimenting with model orchestration platforms that can automatically route tasks to the most appropriate model, whether open or closed. This positions your organization for the agentic future that industry leaders such as EY's Guarrera predict, where model selection becomes invisible to end users.