WEKA's Award-Winning Storage for AI Unleashes Performance on Oracle Cloud Infrastructure

Modern AI workloads place intense demands on data infrastructure. The volume and velocity of data required to keep large-scale GPU clusters saturated have exposed the limitations of traditional storage architectures.

We are proud to announce that Oracle has recognized our work in solving this challenge with two awards: WEKA is the Winner of the 2025 ISV Rising Star Award and a Finalist for the ISV AI Innovation Award. These awards highlight the results customers achieve when deploying WEKA’s NeuralMesh™ on Oracle Cloud Infrastructure (OCI). Our engineering efforts with OCI are focused on providing a high-performance, scalable storage solution that solves critical I/O bottlenecks for the most demanding AI applications.

Why AI Leaders Choose to Run on NeuralMesh and OCI

The ISV Rising Star Award recognizes the significant adoption of NeuralMesh among AI leaders building their businesses on OCI. A growing number of AI pioneers are standardizing on NeuralMesh and OCI to accelerate their data pipelines and get more value from their GPU investments.

Our joint customers include:

  • AI Model Training and Inference companies like Cohere and Upstage, who require extreme performance to reduce model training times, accelerate time to first token, and build scalable token warehouses.
  • Robotics and Embodied AI companies like Physical Intelligence and Hillbot, whose work with complex simulation data demands a responsive, high-performance storage product.
  • Scientific Research organizations such as the Center for AI Safety (CAIS), who leverage the product to accelerate complex analysis and experimentation.

These organizations choose NeuralMesh on OCI because it provides the performance and scalability necessary to meet the demands of their production AI workloads.

Recognition for NeuralMesh Axon: A Solution that Eliminates AI Infrastructure Bottlenecks

Our finalist recognition for the ISV AI Innovation Award points to our newest product: NeuralMesh Axon™.

Axon transforms AI infrastructure by embedding a high-performance data mesh directly inside GPU servers. By co-locating data with the GPUs and harnessing local NVMe, spare CPU cores, and OCI’s high-speed RDMA networking, Axon creates a unified compute and data layer. This architecture minimizes data access latency and delivers superior throughput, enhancing the overall economics of the infrastructure stack.
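To make the consumption model concrete, here is a minimal sketch of what this looks like from the application side: the mesh presents a standard POSIX filesystem on every GPU node, so ordinary training code reads from it directly. The mount point, dataset layout, and PyTorch loader below are illustrative assumptions, not a prescribed configuration.

```python
# Illustrative sketch only: assumes the shared namespace is mounted at
# /mnt/weka on each GPU node (the path is a placeholder, not a required layout).
import os
import torch
from torch.utils.data import Dataset, DataLoader

DATA_DIR = "/mnt/weka/training-shards"  # hypothetical mount point

class ShardDataset(Dataset):
    """Reads pre-serialized tensor shards from the shared filesystem."""
    def __init__(self, root: str):
        self.paths = sorted(
            os.path.join(root, name)
            for name in os.listdir(root)
            if name.endswith(".pt")
        )

    def __len__(self) -> int:
        return len(self.paths)

    def __getitem__(self, idx: int):
        # A plain POSIX read; whether it is served from local NVMe or pulled
        # over RDMA from a peer node is transparent to the application.
        return torch.load(self.paths[idx])

# Multiple worker processes issue reads in parallel so the GPU stays fed.
loader = DataLoader(ShardDataset(DATA_DIR), batch_size=None,
                    num_workers=8, pin_memory=True)

for batch in loader:
    pass  # forward/backward pass would go here
```

The point of the sketch is that no application-level changes are needed: because the data layer lives inside the GPU servers themselves, ordinary file I/O is already the fast path.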

This approach delivers several key benefits:

  • Massive GPU Acceleration: Achieve over 90% GPU utilization (triple the industry average) by ensuring GPUs are constantly fed with data, reducing infrastructure costs, power, and cooling requirements; a back-of-the-envelope illustration follows this list.
  • Linear Scalability: As new GPU instances are added to a cluster, the performance and capacity of NeuralMesh Axon scale with it automatically, eliminating architectural bottlenecks.
  • Expanded Memory for Inference: Seamlessly extend GPU memory for larger context windows and achieve up to 20x faster time-to-first-token (TTFT), enabling more efficient and responsive inference.

  • Simplified Operations: Streamline infrastructure by eliminating external storage systems, allowing teams to focus on building AI models instead of managing complex storage environments.
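A quick calculation shows why the utilization figure above dominates the economics. The cluster size and hourly rate are illustrative assumptions, not published pricing or benchmark data.

```python
# Illustrative arithmetic only; GPU count and $/hour are assumed values.
GPUS = 1024
HOURLY_RATE = 10.0        # assumed cost per GPU-hour in dollars
BASELINE_UTIL = 0.30      # roughly the industry average cited above
IMPROVED_UTIL = 0.90      # the >90% utilization figure cited above

# Idle GPUs still cost money, so the effective price of a *useful*
# GPU-hour is the hourly rate divided by utilization.
baseline = HOURLY_RATE / BASELINE_UTIL
improved = HOURLY_RATE / IMPROVED_UTIL
print(f"effective $/useful GPU-hour: {baseline:.2f} -> {improved:.2f}")

# Equivalently, the same useful throughput needs a third of the GPUs.
needed = int(GPUS * BASELINE_UTIL / IMPROVED_UTIL)
print(f"GPUs for equal useful throughput: {GPUS} -> {needed}")
```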

Optimized AI Storage for OCI 

The tight integration between NeuralMesh Axon and OCI compounds the value of each platform. OCI's bare metal instances provide the direct hardware access NeuralMesh needs to deliver maximum performance, while the high-bandwidth, low-latency RoCE v2 cluster networking enables the efficient, high-speed data sharing between nodes that the architecture depends on.
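As a rough sketch of what provisioning one of these nodes looks like, the snippet below uses the OCI Python SDK to launch a bare metal GPU instance. The shape name, OCIDs, and image are placeholders; attaching the instance to an RDMA-backed cluster network involves additional OCI configuration not shown here, and nothing below is a WEKA-prescribed procedure.

```python
# Sketch only: all OCIDs and the shape are placeholders you must replace.
import oci

config = oci.config.from_file()  # reads credentials from ~/.oci/config
compute = oci.core.ComputeClient(config)

details = oci.core.models.LaunchInstanceDetails(
    availability_domain="<availability-domain>",      # placeholder
    compartment_id="ocid1.compartment.oc1..<id>",     # placeholder
    shape="BM.GPU.H100.8",  # example bare metal GPU shape; confirm regional availability
    display_name="neuralmesh-gpu-node",
    source_details=oci.core.models.InstanceSourceViaImageDetails(
        image_id="ocid1.image.oc1..<id>",             # placeholder
    ),
    create_vnic_details=oci.core.models.CreateVnicDetails(
        subnet_id="ocid1.subnet.oc1..<id>",           # placeholder
    ),
)

instance = compute.launch_instance(details).data
print(f"launched {instance.display_name}: {instance.id}")
```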

These Oracle awards recognize the results this work is delivering for customers. By focusing on performance, simplicity, and scale, NeuralMesh and NeuralMesh Axon on OCI provide a clear path forward for organizations moving AI from research into production.

To Learn More:

Learn More About NeuralMesh for OCI