VAST Data has announced a major collaboration with Microsoft to bring its high-performance AI Operating System (AI OS) to Azure, giving enterprises a streamlined way to deploy next-generation, scalable AI infrastructure directly in the cloud. The announcement was made at Microsoft Ignite, underscoring both companies’ shared ambition to accelerate the development of agentic AI systems.
The partnership enables Azure customers to access VAST’s full suite of AI-native data services—including unified storage, data cataloging, metadata-optimized databases, and intelligent data engines—integrated natively into Azure’s cloud ecosystem. With this, organizations can manage data and AI pipelines consistently across on-premises, hybrid, and multi-cloud environments while benefiting from Azure’s global scale, governance frameworks, and security infrastructure.
A Unified Platform for Agentic AI
Under the collaboration, the VAST AI OS will run directly on Azure compute, providing high throughput, predictable performance, and unified management tools for demanding AI workloads. VAST’s InsightEngine and AgentEngine will enable Azure customers to run intelligent data-driven workflows and autonomous agents directly where data resides—supporting vector search, RAG pipelines, real-time reasoning, and large-scale inference.
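To make the RAG terminology concrete, the sketch below shows the retrieval step such a pipeline performs: embed a question, rank stored chunks by vector similarity, and assemble the best matches into a prompt for an inference endpoint. It is a minimal, self-contained illustration rather than VAST's or Azure's actual API; the `embed` function is a toy stand-in for a real embedding model, and the in-memory `chunks` list stands in for data that a platform component like InsightEngine would index in place.

```python
import numpy as np

# Toy stand-in for a real embedding model: hashes character trigrams into a
# fixed-size vector so the example runs with no external services.
def embed(text: str, dim: int = 256) -> np.ndarray:
    vec = np.zeros(dim)
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# Stand-in for document chunks that would normally be indexed where they live.
chunks = [
    "DataSpace presents one global namespace across on-prem and Azure.",
    "Azure Boost offloads networking and storage I/O from the host.",
    "RAG pipelines retrieve relevant chunks before calling the model.",
]
index = np.stack([embed(c) for c in chunks])

def retrieve(query: str, k: int = 2) -> list[str]:
    scores = index @ embed(query)        # cosine similarity (unit-length vectors)
    top = np.argsort(scores)[::-1][:k]   # highest-scoring chunks first
    return [chunks[i] for i in top]

question = "How does a RAG pipeline use vector search?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt would then be sent to a model inference endpoint
```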
Jeff Denworth, Co-Founder of VAST Data, described the milestone as a convergence of scale, performance, and simplicity:
“This collaboration with Microsoft reflects our shared vision for the future of AI infrastructure, where performance, scale, and simplicity converge to enable enterprises to transform their business with agentic AI.”
Optimized for High-Performance Model Training
The VAST AI OS is optimized for the most demanding model-building environments. Azure GPU and CPU clusters—enhanced by the Laos VM Series and Azure Boost accelerated networking—are kept fully utilized through intelligent caching and fast metadata handling. This ensures smooth scaling from initial pilots to multi-region deployments.
Aung Oo, Vice President of Azure Storage at Microsoft, noted:
“Many AI model builders leverage VAST for its scalability and AI-native capabilities. This collaboration helps mutual customers streamline operations and accelerate time-to-insight for AI workloads of every size.”
Hybrid AI Without Friction
A key feature of the platform is VAST’s exabyte-scale DataSpace, which creates a global namespace to eliminate data silos. This allows customers to burst GPU workloads into Azure seamlessly without data migration or configuration changes—an important capability for enterprises transitioning from on-prem to hybrid AI architectures.
VAST also supports file, object, and block storage interfaces, and its VAST DataBase enables transactional, analytical, and AI workloads to run on a unified platform.
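As a rough sketch of what multi-protocol access can look like in practice, the snippet below writes and lists objects through a generic S3-compatible endpoint using boto3. The endpoint URL, bucket name, and credentials are placeholders, not real VAST or Azure values; the point is that with a unified namespace the same data written over the object interface would also remain reachable through file protocols.

```python
import boto3

# All values below are placeholders for illustration; substitute the endpoint,
# bucket, and credentials from your own environment.
s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.example.internal",   # S3-compatible object endpoint
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)

# Write a training artifact through the object interface...
s3.put_object(Bucket="datasets", Key="embeddings/batch-0001.parquet", Body=b"...")

# ...then list it back. With a multi-protocol namespace, the same object could
# also be read through a file mount instead of the S3 API.
listing = s3.list_objects_v2(Bucket="datasets", Prefix="embeddings/")
for obj in listing.get("Contents", []):
    print(obj["Key"], obj["Size"])
```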
Built for Cost-Efficient Scale
The collaboration leverages VAST’s disaggregated, shared-everything (DASE) architecture, enabling independent scaling of compute and storage inside Azure. With built-in Similarity Reduction to minimize storage footprint, enterprises can operate large-scale AI infrastructure more cost-efficiently.
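To illustrate the general idea behind similarity-based data reduction (a conceptual toy only, not VAST's actual algorithm), the sketch below groups data blocks by a coarse fingerprint, keeps one full reference copy per group, and stores other members as small deltas against that reference:

```python
import hashlib

def fingerprint(block: bytes) -> str:
    # Crude "similarity sketch": hash a downsampled view of the block so that
    # nearly identical blocks tend to land in the same group.
    return hashlib.sha256(block[::16]).hexdigest()[:8]

def delta(ref: bytes, block: bytes) -> list[tuple[int, int]]:
    # Positions and values where the block differs from its reference.
    return [(i, b) for i, (a, b) in enumerate(zip(ref, block)) if a != b]

def reduce_blocks(blocks: list[bytes]) -> tuple[int, int]:
    groups: dict[str, bytes] = {}   # fingerprint -> reference block stored in full
    stored_full, stored_delta = 0, 0
    for block in blocks:
        fp = fingerprint(block)
        if fp not in groups:
            groups[fp] = block
            stored_full += len(block)
        else:
            stored_delta += len(delta(groups[fp], block)) * 3  # rough per-entry cost
    return stored_full, stored_delta

blocks = [bytes(4096), bytes(4095) + b"\x01", bytes([1]) * 4096]
full, deltas = reduce_blocks(blocks)
print(f"full copies: {full} bytes, deltas: ~{deltas} bytes")
```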
A Strategic Role in Microsoft’s AI Roadmap
Microsoft continues to make major investments in AI infrastructure—including custom silicon—and VAST will work closely with the Azure team to align on next-generation hardware and system requirements. The collaboration positions VAST as a key component of Microsoft’s broader AI computing strategy, ensuring future AI systems can operate at global scale with unified data and compute architectures.
Upcoming Joint Engagements
- Microsoft Ignite (San Francisco): VAST CEO Renen Hallak will participate in customer briefings on scaling agentic AI with Azure.
- Supercomputing 2025 (St. Louis): VAST will host Microsoft's Andrew Jones, Engineering Leader for Future Supercomputing & AI Capabilities, to discuss the future of AI cloud infrastructure. Both companies will also showcase demos and technical sessions on-site.
The collaboration marks a significant step toward delivering cloud-native, enterprise-ready AI systems capable of supporting the next wave of AI innovation—from autonomous agents to large-scale model training and globally distributed inferencing.
