
Pure Storage to accelerate enterprise AI adoption with planned NVIDIA DGX SuperPOD certification

Pure Storage is among the first enterprise storage vendors to work with NVIDIA on NVIDIA DGX BasePOD certification and NVIDIA OVX validation. This matters because the certification process ensures the architecture meets rigorous performance and reliability standards.

This gives customers confidence in the infrastructure’s ability to handle demanding AI workloads, ensuring optimal performance and accelerating AI adoption. With a certified architecture, customers can trust that their AI deployments will run smoothly and efficiently, allowing them to focus on innovation and development.

“Enterprises adopting AI need storage performance and flexibility as they architect their infrastructure to address AI workloads at scale. Pure Storage’s certification with the NVIDIA DGX and OVX platforms helps deliver highly performant solutions for customers at every stage of their AI journey,” said Charlie Boyle, Vice President of DGX Platform, NVIDIA.

A notable example is the collaboration between NVIDIA and Pure Storage on the OVX architecture. Built for RTX-accelerated AI and graphics performance, it is well suited to tasks such as image processing and rendering. OVX provides a certified reference architecture for these applications, ensuring optimised performance and seamless integration.

Building upon this pioneering collaboration for AI-ready infrastructure, Pure expects to be a certified storage solution for NVIDIA DGX SuperPOD by the end of 2024.

“End-to-end reference architecture simplifies the process for customers by providing ready-made infrastructure, eliminating the need for them to construct complex architectures to deploy AI. This ready infrastructure accelerates AI adoption by reducing setup time and complexity. Customers benefit from having pre-configured, optimised solutions that allow them to focus on developing and deploying AI applications without worrying about the underlying infrastructure,” added Omar Akar, Regional VP for CEE and META at Pure Storage.

The rack-scale AI infrastructure developed by NVIDIA is designed for training and inference on models with trillions of parameters. That scale makes it well suited to complex generative AI models, which demand massive processing capability, and a powerful platform for AI research and development.