DDN has unveiled Infinia 2.0, a data platform that consolidates AI workloads across data centers and multi-cloud environments. The release focuses on unifying AI data workflows, improving GPU utilization, and reducing costs through a scalable, software-defined architecture.

“Infinia 2.0 represents a paradigm shift in how enterprises and cloud providers gain value from AI while safely managing and optimizing AI data workloads,” said Alex Bouzari, CEO at DDN. “With Infinia, we accelerate customers’ data analytics and AI frameworks, delivering faster model training and real-time insights while also improving GPU efficiency and reducing power usage.”

Streamlined AI Workflows
Infinia 2.0 introduces real-time AI data pipelines and event-driven data movement to automate workflows, minimizing latency and enabling rapid insights. The platform’s secure, multi-tenant environment supports a range of workloads, from transactional databases to stateful containers, all within a single architecture.

“Whether you’re a CXO looking to kickstart AI initiatives or a data scientist seeking to supercharge applications, Infinia offers a highly performant, scalable data fabric,” added Paul Bloch, Co-founder and President of DDN. “Our platform has already been proven in some of the world’s largest AI factories and cloud environments.”

Unified Data Management
Infinia’s Data Ocean feature provides a global view of distributed datasets, simplifying data preparation and analytics while supporting multiple data protocols. This integration reduces complexity, allowing structured and unstructured data to be managed without duplication.

“Data is the lifeblood of AI, which is why enterprises need integrated, full-stack systems and software to drive modern applications,” said Charlie Boyle, vice president of DGX platforms at NVIDIA. “Platforms like DDN Infinia 2.0 provide businesses the infrastructure they need to put their data to work.”

Cost and Performance Improvements
DDN says Infinia 2.0 can cut data center power and cooling requirements by up to 10x. The platform also supports high-density deployments of up to 100 petabytes in a single rack, while delivering 100x faster metadata processing and 10x better cost efficiency than open-source data frameworks.

“AI workloads require real-time data intelligence that eliminates bottlenecks and accelerates workflows,” said Sven Oehme, CTO at DDN. “Infinia 2.0 was designed to maximize AI value, delivering real-time data services, highly efficient multi-tenancy, and an AI-native architecture.”

Future-Ready AI Data Platform
DDN positions Infinia 2.0 as ready for next-generation AI applications such as generative AI and retrieval-augmented generation (RAG). The platform is intended to simplify complex AI workflows, helping enterprises scale operations, strengthen security, and reduce costs while integrating with existing IT infrastructure.