Networking, compute, and storage form a sturdy triangle at the foundation of the data center, enabling data to be processed, stored, and managed. Over the past decade, each side of this triangle has seen remarkable advancements, driving innovation and efficiency in data centers.

Networking is the resilient apex, facilitating seamless connectivity within the data center and beyond. Successive leaps in networking speed (10Gb/s, 25Gb/s, 40Gb/s, 100Gb/s, and now 200Gb/s and 400Gb/s) have steadily accelerated data transfer, ensuring swift and efficient data flow.

Compute is another corner of the triangle, embodying the processing power essential for data center operations. Multi-core processors and specialized accelerators have propelled compute performance, enabling the swift execution of diverse workloads.

Storage completes the trio, offering the capacity and capability to manage and protect data effectively. While the advent of SSDs has improved storage, delivering faster access speeds and enhanced durability, traditional storage designs have not been able to keep up with the tremendous advances of the other legs of the triangle.

Advancements in one part of the triangle may create imbalances or bottlenecks in others. For instance, rapid compute enhancements without commensurate networking or storage infrastructure upgrades may lead to congestion and reduced efficiency. Therefore, maintaining a harmonious integration of networking, compute, and storage is imperative for optimizing data center performance.

Significant improvements in the compute and networking sides of the triangle have been pivotal in propelling the AI revolution forward. In networking, the transition from the IEEE 10GbE standard (802.3ae), ratified in 2002, to the cutting-edge NVIDIA ConnectX-7 400GbE NIC and NDR NVIDIA InfiniBand technologies in 2022 represents an astounding 80x bandwidth improvement over legacy systems. With full-duplex point-to-point links and speeds of 400 gigabits per second, these advancements have enabled unprecedented levels of data exchange and collaboration, laying the groundwork for large-scale AI deployments.

Over the past three decades, computing power has experienced an astonishing surge, and more recent advancements in compute and networking have democratized access to AI technologies, empowering researchers and practitioners to tackle increasingly complex problems and drive innovation across diverse domains.

However, legacy storage systems, such as those built on the 30-year-old NFS protocol, present significant challenges for modern AI development. These systems struggle to fully utilize the bandwidth of modern networks, limiting the speed at which data can be transferred and processed. Moreover, they often have difficulty handling large volumes of small files, which can overwhelm storage metadata servers (MDS) and create performance bottlenecks, handicapping the tremendous advances in networking speed and compute power. These limitations hinder the scalability and efficiency of AI workflows, impeding tasks such as data preprocessing, model training, and inference.
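The small-file problem is easy to demonstrate even on a local filesystem. The sketch below (illustrative only, not WEKA or NFS code) reads the same 8 MB of data two ways: as 1,024 files of 8 KB each, and as a single 8 MB file. The small-file path performs roughly a thousand extra open/close operations; on a network filesystem such as NFS, each of those becomes one or more metadata round trips to the server.

```python
import os
import tempfile
import time

# Illustrative sketch: same total bytes, very different metadata cost.
COUNT, SIZE = 1024, 8 * 1024  # 1,024 files x 8 KB = 8 MB

def write_small_files(root, count=COUNT, size=SIZE):
    for i in range(count):
        with open(os.path.join(root, f"f{i:05d}.bin"), "wb") as f:
            f.write(b"\0" * size)

def read_small_files(root):
    total = 0
    # One directory listing plus a per-file open/read/close for every file.
    for name in sorted(os.listdir(root)):
        with open(os.path.join(root, name), "rb") as f:
            total += len(f.read())
    return total

def read_one_file(path):
    # A single open/read/close for the same amount of data.
    with open(path, "rb") as f:
        return len(f.read())

with tempfile.TemporaryDirectory() as d:
    small_dir = os.path.join(d, "small")
    os.mkdir(small_dir)
    write_small_files(small_dir)

    big_path = os.path.join(d, "big.bin")
    with open(big_path, "wb") as f:
        f.write(b"\0" * (COUNT * SIZE))

    t0 = time.perf_counter()
    small_bytes = read_small_files(small_dir)
    t_small = time.perf_counter() - t0

    t0 = time.perf_counter()
    big_bytes = read_one_file(big_path)
    t_big = time.perf_counter() - t0

    print(f"small files: {small_bytes} bytes in {t_small:.4f}s")
    print(f"one file:    {big_bytes} bytes in {t_big:.4f}s")
```

On a local SSD the gap is modest; over NFS, where every open can incur a network round trip to the metadata server, the small-file path can be orders of magnitude slower, which is exactly the bottleneck AI datasets full of small samples expose.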

This is where WEKA comes in. The WEKA® Data Platform is a subscription-based software solution purpose-built for large-scale AI and multicloud environments. Its advanced architecture delivers radical performance along with ease of use, simple scaling, and seamless data sharing, so you can take full advantage of your enterprise AI workloads in virtually any location. WEKA brings the storage leg of the triangle up to par with the others.

Today we announced that it is now even easier to build a balanced infrastructure triangle for NVIDIA AI workflows with the unveiling of the WEKApod™, a certified data platform appliance designed to work seamlessly with the NVIDIA DGX SuperPOD™.

WEKApod is a turnkey data platform appliance that was purpose-built as a high-performance datastore for NVIDIA DGX SuperPOD. Each appliance consists of pre-configured storage nodes and software for simplified deployment and faster time to value.

A single starting-configuration WEKApod Data Platform Appliance can deliver up to 18,300,000 IOPS, providing high-speed storage for DGX SuperPOD. A one-petabyte WEKApod configuration starts with eight storage nodes and scales to hundreds. WEKApod uses the NVIDIA ConnectX-7 network card to drive 400Gb/s InfiniBand connections to the DGX SuperPOD, and it integrates with NVIDIA Base Command™ Manager for single-pane-of-glass observability that includes WEKA performance and other operational metrics.
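As a back-of-envelope check on these figures: dividing the starting configuration's IOPS evenly across its eight nodes, and converting the 400Gb/s link rate to bytes, gives a feel for the per-node numbers. This is purely illustrative arithmetic from the figures above; actual per-node performance and protocol overheads will differ.

```python
# Back-of-envelope arithmetic from the published figures (illustrative only).
IOPS_STARTING = 18_300_000   # starting WEKApod configuration
NODES_STARTING = 8           # nodes in the one-petabyte starting configuration
LINK_GBPS = 400              # ConnectX-7 InfiniBand link speed, Gb/s

# Naive even split across nodes (real distribution may differ).
iops_per_node = IOPS_STARTING / NODES_STARTING
print(f"~{iops_per_node:,.0f} IOPS per node")  # ~2,287,500

# A 400 Gb/s link carries at most 50 GB/s of data, before protocol overhead.
gbytes_per_sec_per_link = LINK_GBPS / 8
print(f"{gbytes_per_sec_per_link:.0f} GB/s per 400 Gb/s link (theoretical)")
```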

The WEKApod appliance delivers all the benefits and features of the WEKA Data Platform in a validated, turnkey solution. WEKApod simplifies the WEKA experience with pre-configured appliances, including hardware and software, to streamline the sizing, configuration, ordering, and deployment process.

The WEKApod Data Platform Appliance certification for NVIDIA DGX SuperPOD provides just one of a range of WEKA Data Platform deployment options tailored to meet diverse organizational needs. Whether you opt for an on-premises setup, leverage cloud resources, or adopt a hybrid approach, WEKA ensures seamless integration. Moreover, our flexible data portability features facilitate effortless data movement, allowing businesses to easily adapt to evolving operational demands.

For customers deploying NVIDIA DGX SuperPOD, the WEKApod Data Platform Appliance accelerates AI deployments by providing pre-configured environments tailored for AI applications, reducing deployment time and complexity. WEKA’s optimized AI-native architecture ensures fast access to data, minimizing idle time and speeding up computational tasks, leading to faster development of advanced AI solutions. Additionally, WEKA’s energy-efficient design lowers energy costs and carbon footprint by optimizing resource utilization and minimizing hardware footprint, further streamlining AI initiatives and reducing environmental impact.

The trio of networking, compute, and storage forms the bedrock of data center operations, with each component experiencing remarkable advancements over the past decade. Networking innovations have facilitated unprecedented data exchange speeds, which are crucial for large-scale AI deployments. Similarly, compute technology has seen a monumental leap, with multi-core processors and specialized accelerators delivering dramatic improvements over traditional CPU-based systems. However, legacy storage systems have posed significant challenges for modern AI development, hindering scalability and efficiency.

The AI-native WEKA Data Platform bridges this gap by bringing storage up to par with modern computing and networking. By harmonizing networking, compute, and storage, WEKA is streamlining AI initiatives to drive the next wave of innovation while minimizing their environmental impact.

The WEKApod Data Platform Appliance for NVIDIA DGX SuperPOD provides a turnkey solution that accelerates AI deployments, enhances GPU utilization, and reduces energy costs and carbon footprint.

Explore WEKApod