Next Generation WEKApod Shatters AI Storage Economics

CIOs are under pressure to deliver ROI on AI infrastructure. But legacy storage forces an impossible choice: extreme performance or manageable costs. Traditional storage metrics like $/GB or GB/s don’t capture what matters for AI workloads—ensuring storage never bottlenecks expensive GPUs while keeping costs under control.
This summer, WEKA solved the performance challenge with NeuralMesh, an intelligent, adaptive storage system that delivers microsecond latency at scale, and NeuralMesh Axon, which fuses directly into GPU servers to deliver ultra-low-latency storage without separate infrastructure. We broke the memory wall for AI inference with Augmented Memory Grid, extending GPU memory with petabytes of persistent storage at near-memory speeds to dramatically reduce time to first token and lower inference costs.
Today, WEKA announces next-generation WEKApod appliances that shatter AI storage economics. WEKApod Prime delivers 65% better price-performance and 4.6x better capacity density, while WEKApod Nitro provides 2x faster performance with 2x better performance density. By packing dramatically more capability into less space while consuming significantly less power, both configurations directly address the datacenter resource constraints organizations face today.
The Breakthrough:
- Eliminates the AI storage trade-off: extreme performance or manageable costs? Choose both
- 65% better price-performance with intelligent data placement
- 4.6x better capacity density: up to 55 PB in a 42U rack
- 5x better write IOPS per rack unit compared to the previous generation
- 4x better power density: 23,000 IOPS per kW (or 1.6 PB per kW)
- 68% less power per TB while maintaining the performance AI workloads demand
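A quick back-of-the-envelope check shows what these figures mean at rack scale. The derived values below are simple arithmetic on the quoted numbers, not separately published specifications:

```python
# Back-of-the-envelope math using only the headline figures quoted above.
# Derived values are illustrative arithmetic, not published specifications.

rack_units = 42        # standard full-height rack
capacity_pb = 55.0     # up to 55 PB per 42U rack (quoted above)
pb_per_kw = 1.6        # capacity per kW of power (quoted above)

capacity_per_ru = capacity_pb / rack_units   # ~1.31 PB per rack unit
rack_power_kw = capacity_pb / pb_per_kw      # ~34 kW to power a full 55 PB rack

print(f"~{capacity_per_ru:.2f} PB per rack unit")
print(f"~{rack_power_kw:.0f} kW for a fully populated 55 PB rack")
```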
What is WEKApod?
WEKApod is the easiest way to deploy and scale NeuralMesh by WEKA—and customers love it for exactly that reason. These pre-validated appliance configurations provide turnkey building blocks for rapid deployment, eliminating the time and effort typically required to architect, procure, and integrate high-performance storage infrastructure.
Built on industry-standard hardware, WEKApod delivers WEKA’s software-defined flexibility in a ready-to-deploy package. There’s no proprietary hardware lock-in—just validated configurations that work out of the box. Organizations start with as few as 8 servers and scale to hundreds as needs grow, getting to production faster without complex integration work.
The WEKApod family serves diverse infrastructure needs. WEKApod Prime is optimized for price-performance and capacity density—ideal for enterprises running mixed training and inference workloads and research institutions balancing performance needs across multiple teams. WEKApod Nitro is purpose-built for extreme performance, designed for AI clouds offering GPU-as-a-service, AI factories running real-time inference at scale, and providers building the next generation of foundation models.
Now, with dramatic density improvements across the entire product line, WEKApod delivers even more value: the easiest path to deploying NeuralMesh just became the most economical.
Breakthrough Price-Performance
WEKA is the undisputed leader in AI storage performance. But proving ROI means matching infrastructure to workload needs—not every AI application requires absolute maximum performance, and overprovisioning wastes budget.
The completely redesigned WEKApod Prime appliance achieves 65% better price-performance through AlloyFlash—a new capability in NeuralMesh that intelligently places data across high-performance TLC and high-capacity eTLC drives in the same system. Organizations get the performance AI workloads demand at economics that make sense.
Unlike traditional tiered storage with cache hierarchies and latency penalties that can throttle write performance, AlloyFlash optimizes data placement based on workload characteristics, ensuring writes maintain full performance while delivering breakthrough economics. WEKApod delivers consistent, no-compromise performance across all workload types.
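To make the placement idea concrete, here is a deliberately simplified sketch of workload-aware flash placement. This is a toy illustration of the general technique, not WEKA's AlloyFlash implementation; the threshold, labels, and routing rule are hypothetical:

```python
from dataclasses import dataclass

# Toy illustration of workload-aware data placement across two flash tiers.
# This is NOT WEKA's AlloyFlash implementation; the threshold and routing
# rule are hypothetical. Real systems weigh many more signals and serve
# reads directly from both tiers, so there is no cache-miss latency penalty.

@dataclass
class Extent:
    write_rate: float   # observed writes/sec for this extent
    read_rate: float    # observed reads/sec for this extent

def place(extent: Extent, write_hot_threshold: float = 1_000.0) -> str:
    """Route write-hot extents to high-performance TLC; colder data
    lands on high-capacity eTLC in the same system."""
    if extent.write_rate >= write_hot_threshold:
        return "TLC"      # performance flash absorbs heavy writes
    return "eTLC"         # capacity flash holds the bulk of the data

print(place(Extent(write_rate=5_000, read_rate=200)))   # -> TLC
print(place(Extent(write_rate=50, read_rate=3_000)))    # -> eTLC
```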
The result: WEKA now provides the right level of performance and economics for every AI workload. AI clouds can serve diverse customer needs with configurations purpose-built for each profile—from performance-intensive training to cost-optimized inference. Enterprises can deploy AI infrastructure that delivers the performance they need at economics that prove ROI.
Exceptional Performance Density
As AI infrastructure scales, datacenter resources have become the constraint. Limited rack space, power, and cooling capacity drive costs higher—making density an economic imperative, not just an achievement.
The next-generation WEKApod answers this challenge across the entire product line. WEKApod Prime delivers 4.6x better capacity density while consuming 68% less power per TB. For write-intensive AI workloads, Prime delivers up to 5x better write IOPS per rack unit compared to the previous generation, ensuring storage keeps pace with demanding training and checkpointing operations. The result: dramatically lower total cost of ownership through reduced footprint, power consumption, and cooling requirements.
Industry-Leading Performance
For AI factories running hundreds or thousands of GPUs, WEKApod Nitro ensures infrastructure keeps pace with next-generation accelerators. With 800 Gb/s ConnectX-8 / XDR-ready networking, Nitro delivers 2x faster performance and 2x better performance density than the previous generation, providing the extreme performance AI factories demand.
This means faster training times, higher throughput for inference, and full utilization of expensive GPU investments. Whether operating a large-scale AI factory or delivering GPU-as-a-service, WEKApod Nitro ensures storage never limits scale.
Turnkey Simplicity at Scale
All this performance, density, and flexibility comes with the simplicity of a turnkey appliance. The storage industry has long suggested that extreme performance requires complex deployment and specialized expertise. WEKApod changes the equation.
The next-generation WEKApod eliminates the friction associated with deploying and scaling AI storage. A fast, easy setup utility simplifies day-one installation, and an overall plug-and-play experience accelerates scaling WEKApod deployments. As the easiest way to deploy and scale NeuralMesh, WEKApod makes enterprise-grade AI storage accessible to organizations without deep storage engineering expertise. WEKApod also maintains NVIDIA DGX SuperPOD and NVIDIA Cloud Partner (NCP) certification, ensuring seamless integration with reference architectures.
The complete NeuralMesh enterprise feature set comes standard—distributed data protection, instant snapshots, multi-protocol access (POSIX, NFS, SMB, S3, GPUDirect Storage), automated tiering, encryption, and hybrid cloud capabilities. IT teams get the storage performance AI demands without the operational overhead or heavy lifting.
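As an illustration of multi-protocol access, data written through a POSIX mount can be read back over the S3 protocol against the same namespace. This is a minimal sketch: the mount point, endpoint URL, bucket mapping, and credentials below are hypothetical placeholders for your own deployment, not WEKA defaults:

```python
import boto3

# Minimal sketch of multi-protocol access: write via the POSIX mount, read
# the same object back over S3. All names below (mount point, endpoint,
# bucket, credentials) are hypothetical placeholders.

POSIX_PATH = "/mnt/weka/datasets/sample.txt"   # hypothetical POSIX mount
with open(POSIX_PATH, "w") as f:
    f.write("written via POSIX\n")

s3 = boto3.client(
    "s3",
    endpoint_url="https://weka.example.com:9000",  # hypothetical S3 endpoint
    aws_access_key_id="ACCESS_KEY",                # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

# Assumes the filesystem's "datasets" directory is exposed as an S3 bucket.
obj = s3.get_object(Bucket="datasets", Key="sample.txt")
print(obj["Body"].read().decode())                 # -> "written via POSIX"
```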
Built for AI at Scale
The next-generation WEKApod transforms AI storage: uncompromising performance, breakthrough cost-efficiency, and operational simplicity, all in one validated package. Now, organizations get to production faster, scale without budget constraints, and prove infrastructure ROI from day one.
AI clouds get the performance density and deployment speed to scale competitively while serving diverse customer needs with optimal economics for each workload profile.
Large AI deployments training on thousands of GPUs get storage that keeps pace with next-generation accelerators without budget constraints.
Enterprises deploying AI infrastructure get breakthrough cost-efficiency without compromising performance or simplicity—the easiest way to deploy and scale NeuralMesh by WEKA.