Walmart isn’t buying enterprise AI solutions; it’s creating them in its internal AI foundry. The retailer’s Element platform can produce AI applications at a pace that renders traditional software development obsolete. With 1.5 million associates now using AI tools built on Element, Walmart has solved the build-versus-buy dilemma by creating something entirely different.
Walmart designed Element with scale in mind first, and it shows. The platform powers applications handling 3 million daily queries from 900,000 weekly users. It already supports real-time translation across 44 languages and has cut shift planning time from 90 minutes to 30. But these applications are leading indicators of a larger, more fundamentally powerful transformation: Walmart has industrialized AI development.
“We have built Element in a way where it makes it agnostic to different large language models (LLMs),” Parvez Musani, Walmart’s SVP of stores and online pickup and delivery technology, told VentureBeat in a recent interview. “For the use case or the query type that we are after, Element allows us to pick the best LLM out there in the most cost-effective manner.”
By building its own platform, Walmart is beholden to no single vendor and can quickly integrate the latest LLMs to maintain its competitive advantage. The decision to pursue platform independence also reflects a strong commitment to open source, which is baked into Element’s integration options and structure.
Element’s initial production run validates the foundry model. As Musani explains: “The vision with Element always has been, how do we have a tool that allows data scientists and engineers to fast track the development of AI models?”
Five applications were manufactured on the same platform, all drawing on shared foundry capabilities.
Shared infrastructure eliminates redundant development, and unified data pipelines connect the supply chain to the store floor. Because Element is LLM agnostic, as Musani notes, each application can use the best model for its use case or query type in the most cost-effective manner.
Standardized deployment patterns accelerate time to production, and built-in feedback loops ensure continuous improvement. Brooks Forrest, VP of associate tools at Walmart, emphasized: “Our associates are constantly giving us feedback, allowing us to iterate and be agile in delivering capabilities for them.” Forrest continued, “At our scale, with over a million associates across 4,000-plus stores, it’s really important to have simplicity for associates and provide them these tools.”
The foundry doesn’t build applications; it manufactures them with the same production line, quality controls and operational patterns. Each application strengthens the platform’s capabilities for the next build.
Traditional enterprise AI treats each application as a unique project. Element treats them as products rolling off an assembly line. The difference determines whether AI deployment takes quarters or weeks. When asked about velocity, Musani confirmed: “We want agility, and that is what Element will continue to iterate and create new features on.”
The pattern is proven. Data scientists submit specifications; Element handles model selection, infrastructure, scaling and deployment. New applications inherit battle-tested components from previous builds, with development friction approaching zero. The factory accelerates with each production run.
Traditional enterprise AI deployment follows a predictable pattern. Companies identify a use case, evaluate vendors, negotiate contracts and implement solutions. Each new application repeats this cycle.
Walmart’s Element platform has been designed to handle multiple app and product development requests concurrently with minimal waste, much like a factory that has achieved lean manufacturing performance levels. Data scientists and engineers submit requirements. The foundry handles model selection, infrastructure provisioning, scaling and deployment.
The result is that apps move quickly through development and deliver value to associates in a fraction of the time it would take to build them without Element as their foundation. The shift planning tool that saves managers an hour per day? Built on Element. The conversational AI handling associate questions? Element. The AR-powered inventory system? Element again.
The foundry model explains why Walmart can deploy at scale while others pilot. When infrastructure, data pipelines and model management exist as manufacturing capabilities rather than project requirements, the only limiting factor becomes idea generation and validation.
Musani revealed that Element doesn’t just connect to supply chain systems. It transforms operational data into development resources. When trailers arrive at distribution centers, that data flows through Element. Customer shopping patterns feed the same pipelines. Associate feedback creates training datasets.
One of the most surprising benefits of the initial foundry run is the sheer wealth of supply chain data Walmart can draw on, says Musani. Element has been designed to leverage a multitude of data sources to fuel rapid application development. The AI task management system knows when trucks arrive because Element provides unified access to logistics data. It prioritizes tasks based on customer behavior because Element standardizes retail analytics. It adapts to local conditions because Element enables distributed model deployment.
The architecture treats Walmart’s operational complexity as an advantage rather than a challenge. Each of the 4,000 stores in the U.S. generates unique data patterns. Element’s foundry model allows teams to build applications that leverage these differences rather than averaging them away.
Element’s LLM-agnostic architecture enables an unprecedented level of flexibility in deploying enterprise AI. Walmart runs continuous cost-performance arbitrage across AI providers, routing simple queries to inexpensive models while reserving premium models for complex problems. The routing happens automatically based on real-time evaluation.
“Element allows us to pick the best LLM out there in the most cost-effective manner, and also the one that is going to give us the best answer that we are looking for,” said Musani. This capability transforms AI from a fixed cost to a dynamic optimization problem.
The implications extend beyond cost savings. When new models emerge, Walmart can test them immediately without architectural changes. As existing models improve, benefits are automatically extended to all Element-built applications. When prices change, the platform adjusts routing strategies.
This flexibility proved crucial for the translation tool supporting 44 languages. Different language pairs require different model capabilities. Element selects the optimal model for each translation request, balancing accuracy requirements against computational costs.
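Walmart hasn’t published Element’s routing internals, but the behavior Musani describes, sending each request to the cheapest model that still clears the quality bar for that type of query, can be sketched in a few lines of Python. Every model name, price and quality score below is an illustrative assumption, not Element’s actual catalog.

```python
from dataclasses import dataclass

@dataclass
class ModelOption:
    name: str                  # hypothetical model identifier
    cost_per_1k_tokens: float  # current provider price
    quality_score: float       # rolling accuracy estimate from internal evals, 0..1

# Hypothetical catalog; real entries would be refreshed continuously as
# provider prices change and evaluation results come in.
CATALOG = [
    ModelOption("small-general", 0.0005, 0.78),
    ModelOption("large-general", 0.0100, 0.92),
    ModelOption("premium-reasoning", 0.0600, 0.97),
]

# Hypothetical per-use-case quality bars.
QUALITY_BAR = {
    "associate_faq": 0.75,        # simple lookups tolerate cheaper models
    "shift_planning": 0.90,
    "rare_language_pair": 0.95,   # hard translations need the strongest model
}

def route(query_type: str) -> ModelOption:
    """Return the cheapest model that clears the quality bar for this query type,
    falling back to the highest-quality model if nothing qualifies."""
    bar = QUALITY_BAR.get(query_type, 0.90)
    eligible = [m for m in CATALOG if m.quality_score >= bar]
    if not eligible:
        return max(CATALOG, key=lambda m: m.quality_score)
    return min(eligible, key=lambda m: m.cost_per_1k_tokens)

if __name__ == "__main__":
    for qt in ("associate_faq", "rare_language_pair"):
        choice = route(qt)
        print(f"{qt}: {choice.name} (${choice.cost_per_1k_tokens}/1K tokens)")
```

The point of a sketch this small is the design choice it encodes: once quality scores and prices are tracked centrally, swapping providers or adding a newly released model is a catalog update, not an architectural change.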
Walmart’s approach to feedback loops is key to the foundry model. Associates don’t just use applications built on Element; they continuously improve them through structured interaction patterns.
To achieve this, the conversational AI system processes 30,000 daily queries. Each interaction generates signals about model performance, query patterns and user satisfaction. Element captures these signals and feeds them back into the development process. New applications learn from existing deployments before launch.
Implementing a feedback loop at this scale requires sophisticated data pipelines, model versioning systems and deployment orchestration that traditional enterprises struggle to build for even a single application.
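Element’s internal pipeline isn’t public, but the loop described above boils down to capturing a structured signal for every interaction and rolling those signals up per model version so the next build can learn from them. The field names and aggregation below are a minimal hypothetical sketch, not Walmart’s schema.

```python
import json, time
from collections import defaultdict
from statistics import mean

def log_interaction(log, model_version: str, query_type: str,
                    latency_ms: float, thumbs_up: bool | None) -> None:
    """Append one structured feedback signal; in production this would land in a
    streaming pipeline rather than an in-memory list."""
    log.append({
        "ts": time.time(),
        "model_version": model_version,
        "query_type": query_type,
        "latency_ms": latency_ms,
        "thumbs_up": thumbs_up,   # None when the associate gave no rating
    })

def summarize(log):
    """Aggregate satisfaction and latency per model version, the kind of rollup
    that could feed model selection for the next application on the platform."""
    by_version = defaultdict(list)
    for rec in log:
        by_version[rec["model_version"]].append(rec)
    summary = {}
    for version, recs in by_version.items():
        rated = [r for r in recs if r["thumbs_up"] is not None]
        summary[version] = {
            "interactions": len(recs),
            "satisfaction": mean(r["thumbs_up"] for r in rated) if rated else None,
            "p50_latency_ms": sorted(r["latency_ms"] for r in recs)[len(recs) // 2],
        }
    return summary

if __name__ == "__main__":
    log = []
    log_interaction(log, "assistant-v3", "associate_faq", 420.0, True)
    log_interaction(log, "assistant-v3", "associate_faq", 510.0, False)
    log_interaction(log, "assistant-v2", "shift_planning", 880.0, None)
    print(json.dumps(summarize(log), indent=2))
```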
The Element foundry model challenges conventional wisdom around enterprise AI deployment. Instead of relying on vendor expertise, Walmart built capabilities that vendors can’t match. The reasons are structural, not technical.
External platforms optimize for generalization. They build features that work across industries, companies and use cases. This breadth requires compromise. Walmart’s Element optimizes for one customer with unique needs. The 2.1 million associates worldwide share common workflows, terminology and objectives that no external platform can fully address.
The foundry model also changes innovation cycles. When Walmart identifies a new use case, development starts immediately: No vendor evaluation, contract negotiation or integration planning. The idea moves directly from conception to production using existing foundry capabilities.
Walmart’s Element Foundry creates competitive advantages that compound over time. Each new application strengthens the platform, each user interaction improves model selection and each deployment teaches the foundry about production requirements.
Each of Walmart’s competitors faces an uncomfortable choice in the race to deliver AI-enabled apps and tools to their sales associates, channels and partners. Building similar capabilities requires a massive investment and technical expertise. Buying solutions means accepting vendor limitations and slower innovation cycles. Waiting means falling further behind as Walmart’s foundry accelerates.
The retail context amplifies these advantages: in an industry with thin margins, intense competition and a relentless pace, operational improvements have a direct impact on profitability. The shift planning tool saving 60 minutes per manager per day translates to millions in labor cost savings. Multiply that across dozens of Element-built applications, and the financial impact becomes strategic.
Walmart’s Element provides a blueprint for enterprise AI transformation that fundamentally redefines deployment strategy. After decades covering enterprise technology transformations, from ERP to cloud migrations, I’ve rarely seen an approach this transformative.
Four principles define the Element architecture:
First, treat AI models as interchangeable components. Element’s LLM-agnostic design prevents the vendor lock-in that has plagued enterprise software, while enabling continuous optimization.
Second, unify data access before building applications. Musani’s insight: “There’s world knowledge through LLMs, and there’s corporate Walmart knowledge. Element brings these together, creating tooling that accesses data from both sides of the equation.” This integration with supply chain, customer and operational systems creates the foundation for AI development (see the sketch after these principles for one way the two kinds of knowledge can be combined).
Third, industrialize the development process. Element’s foundry model turns AI application creation into a repeatable, scalable manufacturing process. “We needed a tool that allows data scientists and engineers to fast-track AI model development,” Musani noted.
Fourth, design for feedback from inception. Built-in feedback loops ensure applications improve through use, creating what Musani called “transformational, not incremental impact.”
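The second principle is the easiest to misread as abstract. One common pattern that matches Musani’s description of bringing world knowledge and corporate knowledge together is to retrieve relevant internal records and present them to the model alongside the question. Whether Element works exactly this way isn’t public; the function, record format and wording below are assumptions for illustration only.

```python
def build_prompt(question: str, corporate_records: list[str]) -> str:
    """Combine internal records with a user question so the model can draw on
    both corporate data and its general knowledge. Purely illustrative."""
    context = "\n".join(f"- {rec}" for rec in corporate_records)
    return (
        "Answer using the internal records below where they apply; "
        "fall back to general knowledge otherwise.\n"
        f"Internal records:\n{context}\n\n"
        f"Question: {question}"
    )

# Hypothetical example: a store associate asks about an inbound trailer.
records = [
    "Trailer 4821 arrives at DC 6094 at 14:30 local time.",
    "Store 2101 aisle A7 is below the reorder threshold for item 884213.",
]
print(build_prompt("When should we schedule the unload team for trailer 4821?", records))
```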
Walmart just solved enterprise AI’s most complex problem: Scale. Instead of buying or building individual AI tools, they created Element. Think Toyota’s production system, but for AI.
The real insight isn’t the technology; it’s the mindset shift. Walmart treats AI development like manufacturing: standardized processes, modular components and continuous refinement. Each associate interaction makes the system smarter; each deployment teaches the next.
For enterprise leaders watching their AI pilots struggle to scale, Element offers a crucial lesson. Success isn’t about choosing the right model or vendor; it’s about building the organizational capability to turn AI potential into a consistent operational reality at scale.
Walmart has demonstrated what’s possible when enterprises stop thinking of AI as software to install, and start thinking of it as a capability to create. The enterprises that understand this distinction will define the next decade.